00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2039 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3299 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.138 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.139 The recommended git tool is: git 00:00:00.139 using credential 00000000-0000-0000-0000-000000000002 00:00:00.140 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.171 Fetching changes from the remote Git repository 00:00:00.172 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.206 Using shallow fetch with depth 1 00:00:00.206 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.206 > git --version # timeout=10 00:00:00.231 > git --version # 'git version 2.39.2' 00:00:00.231 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.252 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.252 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.337 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.347 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.358 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:06.358 > git config core.sparsecheckout # timeout=10 00:00:06.368 > git read-tree -mu HEAD # timeout=10 00:00:06.383 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:06.408 Commit message: "packer: Add bios builder" 00:00:06.409 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:06.508 [Pipeline] Start of Pipeline 00:00:06.518 [Pipeline] library 00:00:06.519 Loading library shm_lib@master 00:00:06.520 Library shm_lib@master is cached. Copying from home. 00:00:06.535 [Pipeline] node 00:00:06.545 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.546 [Pipeline] { 00:00:06.555 [Pipeline] catchError 00:00:06.556 [Pipeline] { 00:00:06.568 [Pipeline] wrap 00:00:06.577 [Pipeline] { 00:00:06.585 [Pipeline] stage 00:00:06.587 [Pipeline] { (Prologue) 00:00:06.755 [Pipeline] sh 00:00:07.041 + logger -p user.info -t JENKINS-CI 00:00:07.059 [Pipeline] echo 00:00:07.061 Node: GP11 00:00:07.068 [Pipeline] sh 00:00:07.375 [Pipeline] setCustomBuildProperty 00:00:07.384 [Pipeline] echo 00:00:07.385 Cleanup processes 00:00:07.390 [Pipeline] sh 00:00:07.668 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.668 437322 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.681 [Pipeline] sh 00:00:07.962 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.963 ++ grep -v 'sudo pgrep' 00:00:07.963 ++ awk '{print $1}' 00:00:07.963 + sudo kill -9 00:00:07.963 + true 00:00:07.975 [Pipeline] cleanWs 00:00:07.983 [WS-CLEANUP] Deleting project workspace... 00:00:07.983 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.989 [WS-CLEANUP] done 00:00:07.991 [Pipeline] setCustomBuildProperty 00:00:08.001 [Pipeline] sh 00:00:08.287 + sudo git config --global --replace-all safe.directory '*' 00:00:08.385 [Pipeline] httpRequest 00:00:08.401 [Pipeline] echo 00:00:08.402 Sorcerer 10.211.164.101 is alive 00:00:08.406 [Pipeline] httpRequest 00:00:08.410 HttpMethod: GET 00:00:08.411 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:08.411 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:08.421 Response Code: HTTP/1.1 200 OK 00:00:08.421 Success: Status code 200 is in the accepted range: 200,404 00:00:08.422 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:14.193 [Pipeline] sh 00:00:14.479 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:14.495 [Pipeline] httpRequest 00:00:14.526 [Pipeline] echo 00:00:14.528 Sorcerer 10.211.164.101 is alive 00:00:14.536 [Pipeline] httpRequest 00:00:14.540 HttpMethod: GET 00:00:14.541 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:14.541 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:14.558 Response Code: HTTP/1.1 200 OK 00:00:14.558 Success: Status code 200 is in the accepted range: 200,404 00:00:14.559 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:07.910 [Pipeline] sh 00:01:08.195 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:10.739 [Pipeline] sh 00:01:11.023 + git -C spdk log --oneline -n5 00:01:11.023 dbef7efac test: fix dpdk builds on ubuntu24 00:01:11.023 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:11.023 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:11.023 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:11.023 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:11.036 [Pipeline] } 00:01:11.053 [Pipeline] // stage 00:01:11.062 [Pipeline] stage 00:01:11.065 [Pipeline] { (Prepare) 00:01:11.083 [Pipeline] writeFile 00:01:11.100 [Pipeline] sh 00:01:11.385 + logger -p user.info -t JENKINS-CI 00:01:11.398 [Pipeline] sh 00:01:11.716 + logger -p user.info -t JENKINS-CI 00:01:11.728 [Pipeline] sh 00:01:12.012 + cat autorun-spdk.conf 00:01:12.012 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.012 SPDK_TEST_NVMF=1 00:01:12.012 SPDK_TEST_NVME_CLI=1 00:01:12.012 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.012 SPDK_TEST_NVMF_NICS=e810 00:01:12.012 SPDK_RUN_UBSAN=1 00:01:12.012 NET_TYPE=phy 00:01:12.019 RUN_NIGHTLY=1 00:01:12.024 [Pipeline] readFile 00:01:12.050 [Pipeline] withEnv 00:01:12.052 [Pipeline] { 00:01:12.065 [Pipeline] sh 00:01:12.350 + set -ex 00:01:12.350 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:12.350 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:12.350 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.350 ++ SPDK_TEST_NVMF=1 00:01:12.350 ++ SPDK_TEST_NVME_CLI=1 00:01:12.350 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.350 ++ SPDK_TEST_NVMF_NICS=e810 00:01:12.350 ++ SPDK_RUN_UBSAN=1 00:01:12.350 ++ NET_TYPE=phy 00:01:12.350 ++ RUN_NIGHTLY=1 00:01:12.350 + case $SPDK_TEST_NVMF_NICS in 00:01:12.350 + DRIVERS=ice 00:01:12.350 + [[ tcp == \r\d\m\a ]] 00:01:12.350 + [[ -n ice ]] 00:01:12.350 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:12.350 
rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:12.350 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:12.350 rmmod: ERROR: Module irdma is not currently loaded 00:01:12.350 rmmod: ERROR: Module i40iw is not currently loaded 00:01:12.350 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:12.350 + true 00:01:12.350 + for D in $DRIVERS 00:01:12.350 + sudo modprobe ice 00:01:12.350 + exit 0 00:01:12.360 [Pipeline] } 00:01:12.377 [Pipeline] // withEnv 00:01:12.383 [Pipeline] } 00:01:12.398 [Pipeline] // stage 00:01:12.406 [Pipeline] catchError 00:01:12.408 [Pipeline] { 00:01:12.421 [Pipeline] timeout 00:01:12.421 Timeout set to expire in 50 min 00:01:12.422 [Pipeline] { 00:01:12.434 [Pipeline] stage 00:01:12.435 [Pipeline] { (Tests) 00:01:12.446 [Pipeline] sh 00:01:12.726 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.726 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.726 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.727 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:12.727 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.727 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:12.727 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:12.727 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:12.727 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:12.727 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:12.727 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:12.727 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.727 + source /etc/os-release 00:01:12.727 ++ NAME='Fedora Linux' 00:01:12.727 ++ VERSION='38 (Cloud Edition)' 00:01:12.727 ++ ID=fedora 00:01:12.727 ++ VERSION_ID=38 00:01:12.727 ++ VERSION_CODENAME= 00:01:12.727 ++ PLATFORM_ID=platform:f38 00:01:12.727 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:12.727 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:12.727 ++ LOGO=fedora-logo-icon 00:01:12.727 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:12.727 ++ HOME_URL=https://fedoraproject.org/ 00:01:12.727 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:12.727 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:12.727 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:12.727 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:12.727 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:12.727 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:12.727 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:12.727 ++ SUPPORT_END=2024-05-14 00:01:12.727 ++ VARIANT='Cloud Edition' 00:01:12.727 ++ VARIANT_ID=cloud 00:01:12.727 + uname -a 00:01:12.727 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:12.727 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:13.664 Hugepages 00:01:13.665 node hugesize free / total 00:01:13.665 node0 1048576kB 0 / 0 00:01:13.665 node0 2048kB 0 / 0 00:01:13.665 node1 1048576kB 0 / 0 00:01:13.665 node1 2048kB 0 / 0 00:01:13.665 00:01:13.665 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:13.665 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:13.665 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:13.665 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:13.665 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:13.665 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:13.665 I/OAT 
0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:13.665 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:13.665 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:13.665 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:13.665 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:13.665 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:13.665 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:13.665 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:13.665 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:13.665 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:13.665 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:13.665 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:13.665 + rm -f /tmp/spdk-ld-path 00:01:13.665 + source autorun-spdk.conf 00:01:13.665 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.665 ++ SPDK_TEST_NVMF=1 00:01:13.665 ++ SPDK_TEST_NVME_CLI=1 00:01:13.665 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.665 ++ SPDK_TEST_NVMF_NICS=e810 00:01:13.665 ++ SPDK_RUN_UBSAN=1 00:01:13.665 ++ NET_TYPE=phy 00:01:13.665 ++ RUN_NIGHTLY=1 00:01:13.665 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:13.665 + [[ -n '' ]] 00:01:13.665 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.665 + for M in /var/spdk/build-*-manifest.txt 00:01:13.665 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:13.665 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.665 + for M in /var/spdk/build-*-manifest.txt 00:01:13.665 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:13.665 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.665 ++ uname 00:01:13.665 + [[ Linux == \L\i\n\u\x ]] 00:01:13.665 + sudo dmesg -T 00:01:13.923 + sudo dmesg --clear 00:01:13.923 + dmesg_pid=438097 00:01:13.923 + [[ Fedora Linux == FreeBSD ]] 00:01:13.923 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.923 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.923 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:13.923 + [[ -x /usr/src/fio-static/fio ]] 00:01:13.923 + export FIO_BIN=/usr/src/fio-static/fio 00:01:13.923 + sudo dmesg -Tw 00:01:13.923 + FIO_BIN=/usr/src/fio-static/fio 00:01:13.923 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:13.923 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:13.923 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:13.923 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.923 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.923 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:13.923 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.923 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.923 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:13.923 Test configuration: 00:01:13.923 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.923 SPDK_TEST_NVMF=1 00:01:13.923 SPDK_TEST_NVME_CLI=1 00:01:13.923 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.923 SPDK_TEST_NVMF_NICS=e810 00:01:13.923 SPDK_RUN_UBSAN=1 00:01:13.923 NET_TYPE=phy 00:01:13.923 RUN_NIGHTLY=1 01:09:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:13.923 01:09:05 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:13.923 01:09:05 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:13.924 01:09:05 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:13.924 01:09:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.924 01:09:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.924 01:09:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.924 01:09:05 -- paths/export.sh@5 -- $ export PATH 00:01:13.924 01:09:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.924 01:09:05 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:13.924 01:09:05 -- common/autobuild_common.sh@438 -- $ date +%s 00:01:13.924 01:09:05 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1722035345.XXXXXX 00:01:13.924 01:09:05 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1722035345.zryuMj 00:01:13.924 01:09:05 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:01:13.924 01:09:05 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 
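
The repeated PATH exports above come from paths/export.sh, which prepends each toolchain directory (golangci-lint, Go, protoc) ahead of the system paths before the build proper starts. A minimal sketch of that prepend-and-export pattern follows; the loop and the variable name are illustrative, not the script's actual contents:

  # Illustrative only: prepend each toolchain bin directory so it shadows
  # any system-installed copy, then export the result for child processes.
  for tooldir in /opt/golangci/1.54.2/bin /opt/go/1.21.1/bin /opt/protoc/21.7/bin; do
      PATH="$tooldir:$PATH"
  done
  export PATH
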
00:01:13.924 01:09:05 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:13.924 01:09:05 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:13.924 01:09:05 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:13.924 01:09:05 -- common/autobuild_common.sh@454 -- $ get_config_params 00:01:13.924 01:09:05 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:13.924 01:09:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.924 01:09:05 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:13.924 01:09:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:13.924 01:09:05 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:13.924 01:09:05 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.924 01:09:05 -- spdk/autobuild.sh@16 -- $ date -u 00:01:13.924 Fri Jul 26 11:09:05 PM UTC 2024 00:01:13.924 01:09:05 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:13.924 LTS-60-gdbef7efac 00:01:13.924 01:09:05 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:13.924 01:09:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:13.924 01:09:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:13.924 01:09:05 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:13.924 01:09:05 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:13.924 01:09:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.924 ************************************ 00:01:13.924 START TEST ubsan 00:01:13.924 ************************************ 00:01:13.924 01:09:05 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:13.924 using ubsan 00:01:13.924 00:01:13.924 real 0m0.000s 00:01:13.924 user 0m0.000s 00:01:13.924 sys 0m0.000s 00:01:13.924 01:09:05 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:13.924 01:09:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.924 ************************************ 00:01:13.924 END TEST ubsan 00:01:13.924 ************************************ 00:01:13.924 01:09:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:13.924 01:09:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:13.924 01:09:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:13.924 01:09:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:13.924 01:09:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:13.924 01:09:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:13.924 01:09:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:13.924 01:09:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:13.924 01:09:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:13.924 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:13.924 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:14.184 Using 'verbs' RDMA provider 00:01:24.736 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:34.719 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:34.719 Creating mk/config.mk...done. 00:01:34.719 Creating mk/cc.flags.mk...done. 00:01:34.719 Type 'make' to build. 00:01:34.719 01:09:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:34.719 01:09:25 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:34.719 01:09:25 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:34.719 01:09:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.719 ************************************ 00:01:34.719 START TEST make 00:01:34.719 ************************************ 00:01:34.719 01:09:25 -- common/autotest_common.sh@1104 -- $ make -j48 00:01:34.719 make[1]: Nothing to be done for 'all'. 00:01:42.887 The Meson build system 00:01:42.887 Version: 1.3.1 00:01:42.887 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:42.887 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:42.887 Build type: native build 00:01:42.887 Program cat found: YES (/usr/bin/cat) 00:01:42.887 Project name: DPDK 00:01:42.887 Project version: 23.11.0 00:01:42.887 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:42.887 C linker for the host machine: cc ld.bfd 2.39-16 00:01:42.887 Host machine cpu family: x86_64 00:01:42.887 Host machine cpu: x86_64 00:01:42.887 Message: ## Building in Developer Mode ## 00:01:42.887 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:42.887 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:42.887 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:42.887 Program python3 found: YES (/usr/bin/python3) 00:01:42.887 Program cat found: YES (/usr/bin/cat) 00:01:42.887 Compiler for C supports arguments -march=native: YES 00:01:42.887 Checking for size of "void *" : 8 00:01:42.887 Checking for size of "void *" : 8 (cached) 00:01:42.887 Library m found: YES 00:01:42.887 Library numa found: YES 00:01:42.887 Has header "numaif.h" : YES 00:01:42.887 Library fdt found: NO 00:01:42.887 Library execinfo found: NO 00:01:42.887 Has header "execinfo.h" : YES 00:01:42.887 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:42.888 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:42.888 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:42.888 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:42.888 Run-time dependency openssl found: YES 3.0.9 00:01:42.888 Run-time dependency libpcap found: YES 1.10.4 00:01:42.888 Has header "pcap.h" with dependency libpcap: YES 00:01:42.888 Compiler for C supports arguments -Wcast-qual: YES 00:01:42.888 Compiler for C supports arguments -Wdeprecated: YES 00:01:42.888 Compiler for C supports arguments -Wformat: YES 00:01:42.888 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:42.888 Compiler for C supports arguments -Wformat-security: NO 00:01:42.888 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:42.888 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:42.888 Compiler for C 
supports arguments -Wnested-externs: YES 00:01:42.888 Compiler for C supports arguments -Wold-style-definition: YES 00:01:42.888 Compiler for C supports arguments -Wpointer-arith: YES 00:01:42.888 Compiler for C supports arguments -Wsign-compare: YES 00:01:42.888 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:42.888 Compiler for C supports arguments -Wundef: YES 00:01:42.888 Compiler for C supports arguments -Wwrite-strings: YES 00:01:42.888 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:42.888 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:42.888 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:42.888 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:42.888 Program objdump found: YES (/usr/bin/objdump) 00:01:42.888 Compiler for C supports arguments -mavx512f: YES 00:01:42.888 Checking if "AVX512 checking" compiles: YES 00:01:42.888 Fetching value of define "__SSE4_2__" : 1 00:01:42.888 Fetching value of define "__AES__" : 1 00:01:42.888 Fetching value of define "__AVX__" : 1 00:01:42.888 Fetching value of define "__AVX2__" : (undefined) 00:01:42.888 Fetching value of define "__AVX512BW__" : (undefined) 00:01:42.888 Fetching value of define "__AVX512CD__" : (undefined) 00:01:42.888 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:42.888 Fetching value of define "__AVX512F__" : (undefined) 00:01:42.888 Fetching value of define "__AVX512VL__" : (undefined) 00:01:42.888 Fetching value of define "__PCLMUL__" : 1 00:01:42.888 Fetching value of define "__RDRND__" : 1 00:01:42.888 Fetching value of define "__RDSEED__" : (undefined) 00:01:42.888 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:42.888 Fetching value of define "__znver1__" : (undefined) 00:01:42.888 Fetching value of define "__znver2__" : (undefined) 00:01:42.888 Fetching value of define "__znver3__" : (undefined) 00:01:42.888 Fetching value of define "__znver4__" : (undefined) 00:01:42.888 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:42.888 Message: lib/log: Defining dependency "log" 00:01:42.888 Message: lib/kvargs: Defining dependency "kvargs" 00:01:42.888 Message: lib/telemetry: Defining dependency "telemetry" 00:01:42.888 Checking for function "getentropy" : NO 00:01:42.888 Message: lib/eal: Defining dependency "eal" 00:01:42.888 Message: lib/ring: Defining dependency "ring" 00:01:42.888 Message: lib/rcu: Defining dependency "rcu" 00:01:42.888 Message: lib/mempool: Defining dependency "mempool" 00:01:42.888 Message: lib/mbuf: Defining dependency "mbuf" 00:01:42.888 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:42.888 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:42.888 Compiler for C supports arguments -mpclmul: YES 00:01:42.888 Compiler for C supports arguments -maes: YES 00:01:42.888 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:42.888 Compiler for C supports arguments -mavx512bw: YES 00:01:42.888 Compiler for C supports arguments -mavx512dq: YES 00:01:42.888 Compiler for C supports arguments -mavx512vl: YES 00:01:42.888 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:42.888 Compiler for C supports arguments -mavx2: YES 00:01:42.888 Compiler for C supports arguments -mavx: YES 00:01:42.888 Message: lib/net: Defining dependency "net" 00:01:42.888 Message: lib/meter: Defining dependency "meter" 00:01:42.888 Message: lib/ethdev: Defining dependency "ethdev" 00:01:42.888 Message: lib/pci: Defining dependency 
"pci" 00:01:42.888 Message: lib/cmdline: Defining dependency "cmdline" 00:01:42.888 Message: lib/hash: Defining dependency "hash" 00:01:42.888 Message: lib/timer: Defining dependency "timer" 00:01:42.888 Message: lib/compressdev: Defining dependency "compressdev" 00:01:42.888 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:42.888 Message: lib/dmadev: Defining dependency "dmadev" 00:01:42.888 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:42.888 Message: lib/power: Defining dependency "power" 00:01:42.888 Message: lib/reorder: Defining dependency "reorder" 00:01:42.888 Message: lib/security: Defining dependency "security" 00:01:42.888 Has header "linux/userfaultfd.h" : YES 00:01:42.888 Has header "linux/vduse.h" : YES 00:01:42.888 Message: lib/vhost: Defining dependency "vhost" 00:01:42.888 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:42.888 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:42.888 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:42.888 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:42.888 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:42.888 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:42.888 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:42.888 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:42.888 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:42.888 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:42.888 Program doxygen found: YES (/usr/bin/doxygen) 00:01:42.888 Configuring doxy-api-html.conf using configuration 00:01:42.888 Configuring doxy-api-man.conf using configuration 00:01:42.888 Program mandb found: YES (/usr/bin/mandb) 00:01:42.888 Program sphinx-build found: NO 00:01:42.888 Configuring rte_build_config.h using configuration 00:01:42.888 Message: 00:01:42.888 ================= 00:01:42.888 Applications Enabled 00:01:42.888 ================= 00:01:42.888 00:01:42.888 apps: 00:01:42.888 00:01:42.888 00:01:42.888 Message: 00:01:42.888 ================= 00:01:42.888 Libraries Enabled 00:01:42.888 ================= 00:01:42.888 00:01:42.888 libs: 00:01:42.888 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:42.888 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:42.888 cryptodev, dmadev, power, reorder, security, vhost, 00:01:42.888 00:01:42.888 Message: 00:01:42.888 =============== 00:01:42.888 Drivers Enabled 00:01:42.888 =============== 00:01:42.888 00:01:42.888 common: 00:01:42.888 00:01:42.888 bus: 00:01:42.888 pci, vdev, 00:01:42.888 mempool: 00:01:42.888 ring, 00:01:42.888 dma: 00:01:42.888 00:01:42.888 net: 00:01:42.888 00:01:42.888 crypto: 00:01:42.888 00:01:42.888 compress: 00:01:42.888 00:01:42.888 vdpa: 00:01:42.888 00:01:42.888 00:01:42.888 Message: 00:01:42.888 ================= 00:01:42.888 Content Skipped 00:01:42.888 ================= 00:01:42.888 00:01:42.888 apps: 00:01:42.888 dumpcap: explicitly disabled via build config 00:01:42.888 graph: explicitly disabled via build config 00:01:42.888 pdump: explicitly disabled via build config 00:01:42.888 proc-info: explicitly disabled via build config 00:01:42.888 test-acl: explicitly disabled via build config 00:01:42.888 test-bbdev: explicitly disabled via build config 00:01:42.888 test-cmdline: explicitly disabled via build config 00:01:42.888 test-compress-perf: explicitly 
disabled via build config 00:01:42.888 test-crypto-perf: explicitly disabled via build config 00:01:42.888 test-dma-perf: explicitly disabled via build config 00:01:42.888 test-eventdev: explicitly disabled via build config 00:01:42.888 test-fib: explicitly disabled via build config 00:01:42.888 test-flow-perf: explicitly disabled via build config 00:01:42.888 test-gpudev: explicitly disabled via build config 00:01:42.888 test-mldev: explicitly disabled via build config 00:01:42.888 test-pipeline: explicitly disabled via build config 00:01:42.888 test-pmd: explicitly disabled via build config 00:01:42.888 test-regex: explicitly disabled via build config 00:01:42.888 test-sad: explicitly disabled via build config 00:01:42.888 test-security-perf: explicitly disabled via build config 00:01:42.888 00:01:42.888 libs: 00:01:42.888 metrics: explicitly disabled via build config 00:01:42.888 acl: explicitly disabled via build config 00:01:42.888 bbdev: explicitly disabled via build config 00:01:42.888 bitratestats: explicitly disabled via build config 00:01:42.888 bpf: explicitly disabled via build config 00:01:42.888 cfgfile: explicitly disabled via build config 00:01:42.888 distributor: explicitly disabled via build config 00:01:42.888 efd: explicitly disabled via build config 00:01:42.888 eventdev: explicitly disabled via build config 00:01:42.888 dispatcher: explicitly disabled via build config 00:01:42.888 gpudev: explicitly disabled via build config 00:01:42.888 gro: explicitly disabled via build config 00:01:42.888 gso: explicitly disabled via build config 00:01:42.888 ip_frag: explicitly disabled via build config 00:01:42.888 jobstats: explicitly disabled via build config 00:01:42.888 latencystats: explicitly disabled via build config 00:01:42.889 lpm: explicitly disabled via build config 00:01:42.889 member: explicitly disabled via build config 00:01:42.889 pcapng: explicitly disabled via build config 00:01:42.889 rawdev: explicitly disabled via build config 00:01:42.889 regexdev: explicitly disabled via build config 00:01:42.889 mldev: explicitly disabled via build config 00:01:42.889 rib: explicitly disabled via build config 00:01:42.889 sched: explicitly disabled via build config 00:01:42.889 stack: explicitly disabled via build config 00:01:42.889 ipsec: explicitly disabled via build config 00:01:42.889 pdcp: explicitly disabled via build config 00:01:42.889 fib: explicitly disabled via build config 00:01:42.889 port: explicitly disabled via build config 00:01:42.889 pdump: explicitly disabled via build config 00:01:42.889 table: explicitly disabled via build config 00:01:42.889 pipeline: explicitly disabled via build config 00:01:42.889 graph: explicitly disabled via build config 00:01:42.889 node: explicitly disabled via build config 00:01:42.889 00:01:42.889 drivers: 00:01:42.889 common/cpt: not in enabled drivers build config 00:01:42.889 common/dpaax: not in enabled drivers build config 00:01:42.889 common/iavf: not in enabled drivers build config 00:01:42.889 common/idpf: not in enabled drivers build config 00:01:42.889 common/mvep: not in enabled drivers build config 00:01:42.889 common/octeontx: not in enabled drivers build config 00:01:42.889 bus/auxiliary: not in enabled drivers build config 00:01:42.889 bus/cdx: not in enabled drivers build config 00:01:42.889 bus/dpaa: not in enabled drivers build config 00:01:42.889 bus/fslmc: not in enabled drivers build config 00:01:42.889 bus/ifpga: not in enabled drivers build config 00:01:42.889 bus/platform: not in enabled drivers 
build config 00:01:42.889 bus/vmbus: not in enabled drivers build config 00:01:42.889 common/cnxk: not in enabled drivers build config 00:01:42.889 common/mlx5: not in enabled drivers build config 00:01:42.889 common/nfp: not in enabled drivers build config 00:01:42.889 common/qat: not in enabled drivers build config 00:01:42.889 common/sfc_efx: not in enabled drivers build config 00:01:42.889 mempool/bucket: not in enabled drivers build config 00:01:42.889 mempool/cnxk: not in enabled drivers build config 00:01:42.889 mempool/dpaa: not in enabled drivers build config 00:01:42.889 mempool/dpaa2: not in enabled drivers build config 00:01:42.889 mempool/octeontx: not in enabled drivers build config 00:01:42.889 mempool/stack: not in enabled drivers build config 00:01:42.889 dma/cnxk: not in enabled drivers build config 00:01:42.889 dma/dpaa: not in enabled drivers build config 00:01:42.889 dma/dpaa2: not in enabled drivers build config 00:01:42.889 dma/hisilicon: not in enabled drivers build config 00:01:42.889 dma/idxd: not in enabled drivers build config 00:01:42.889 dma/ioat: not in enabled drivers build config 00:01:42.889 dma/skeleton: not in enabled drivers build config 00:01:42.889 net/af_packet: not in enabled drivers build config 00:01:42.889 net/af_xdp: not in enabled drivers build config 00:01:42.889 net/ark: not in enabled drivers build config 00:01:42.889 net/atlantic: not in enabled drivers build config 00:01:42.889 net/avp: not in enabled drivers build config 00:01:42.889 net/axgbe: not in enabled drivers build config 00:01:42.889 net/bnx2x: not in enabled drivers build config 00:01:42.889 net/bnxt: not in enabled drivers build config 00:01:42.889 net/bonding: not in enabled drivers build config 00:01:42.889 net/cnxk: not in enabled drivers build config 00:01:42.889 net/cpfl: not in enabled drivers build config 00:01:42.889 net/cxgbe: not in enabled drivers build config 00:01:42.889 net/dpaa: not in enabled drivers build config 00:01:42.889 net/dpaa2: not in enabled drivers build config 00:01:42.889 net/e1000: not in enabled drivers build config 00:01:42.889 net/ena: not in enabled drivers build config 00:01:42.889 net/enetc: not in enabled drivers build config 00:01:42.889 net/enetfec: not in enabled drivers build config 00:01:42.889 net/enic: not in enabled drivers build config 00:01:42.889 net/failsafe: not in enabled drivers build config 00:01:42.889 net/fm10k: not in enabled drivers build config 00:01:42.889 net/gve: not in enabled drivers build config 00:01:42.889 net/hinic: not in enabled drivers build config 00:01:42.889 net/hns3: not in enabled drivers build config 00:01:42.889 net/i40e: not in enabled drivers build config 00:01:42.889 net/iavf: not in enabled drivers build config 00:01:42.889 net/ice: not in enabled drivers build config 00:01:42.889 net/idpf: not in enabled drivers build config 00:01:42.889 net/igc: not in enabled drivers build config 00:01:42.889 net/ionic: not in enabled drivers build config 00:01:42.889 net/ipn3ke: not in enabled drivers build config 00:01:42.889 net/ixgbe: not in enabled drivers build config 00:01:42.889 net/mana: not in enabled drivers build config 00:01:42.889 net/memif: not in enabled drivers build config 00:01:42.889 net/mlx4: not in enabled drivers build config 00:01:42.889 net/mlx5: not in enabled drivers build config 00:01:42.889 net/mvneta: not in enabled drivers build config 00:01:42.889 net/mvpp2: not in enabled drivers build config 00:01:42.889 net/netvsc: not in enabled drivers build config 00:01:42.889 net/nfb: not 
in enabled drivers build config 00:01:42.889 net/nfp: not in enabled drivers build config 00:01:42.889 net/ngbe: not in enabled drivers build config 00:01:42.889 net/null: not in enabled drivers build config 00:01:42.889 net/octeontx: not in enabled drivers build config 00:01:42.889 net/octeon_ep: not in enabled drivers build config 00:01:42.889 net/pcap: not in enabled drivers build config 00:01:42.889 net/pfe: not in enabled drivers build config 00:01:42.889 net/qede: not in enabled drivers build config 00:01:42.889 net/ring: not in enabled drivers build config 00:01:42.889 net/sfc: not in enabled drivers build config 00:01:42.889 net/softnic: not in enabled drivers build config 00:01:42.889 net/tap: not in enabled drivers build config 00:01:42.889 net/thunderx: not in enabled drivers build config 00:01:42.889 net/txgbe: not in enabled drivers build config 00:01:42.889 net/vdev_netvsc: not in enabled drivers build config 00:01:42.889 net/vhost: not in enabled drivers build config 00:01:42.889 net/virtio: not in enabled drivers build config 00:01:42.889 net/vmxnet3: not in enabled drivers build config 00:01:42.889 raw/*: missing internal dependency, "rawdev" 00:01:42.889 crypto/armv8: not in enabled drivers build config 00:01:42.889 crypto/bcmfs: not in enabled drivers build config 00:01:42.889 crypto/caam_jr: not in enabled drivers build config 00:01:42.889 crypto/ccp: not in enabled drivers build config 00:01:42.889 crypto/cnxk: not in enabled drivers build config 00:01:42.889 crypto/dpaa_sec: not in enabled drivers build config 00:01:42.889 crypto/dpaa2_sec: not in enabled drivers build config 00:01:42.889 crypto/ipsec_mb: not in enabled drivers build config 00:01:42.889 crypto/mlx5: not in enabled drivers build config 00:01:42.889 crypto/mvsam: not in enabled drivers build config 00:01:42.889 crypto/nitrox: not in enabled drivers build config 00:01:42.889 crypto/null: not in enabled drivers build config 00:01:42.889 crypto/octeontx: not in enabled drivers build config 00:01:42.889 crypto/openssl: not in enabled drivers build config 00:01:42.889 crypto/scheduler: not in enabled drivers build config 00:01:42.889 crypto/uadk: not in enabled drivers build config 00:01:42.889 crypto/virtio: not in enabled drivers build config 00:01:42.889 compress/isal: not in enabled drivers build config 00:01:42.889 compress/mlx5: not in enabled drivers build config 00:01:42.889 compress/octeontx: not in enabled drivers build config 00:01:42.889 compress/zlib: not in enabled drivers build config 00:01:42.889 regex/*: missing internal dependency, "regexdev" 00:01:42.889 ml/*: missing internal dependency, "mldev" 00:01:42.889 vdpa/ifc: not in enabled drivers build config 00:01:42.889 vdpa/mlx5: not in enabled drivers build config 00:01:42.889 vdpa/nfp: not in enabled drivers build config 00:01:42.889 vdpa/sfc: not in enabled drivers build config 00:01:42.889 event/*: missing internal dependency, "eventdev" 00:01:42.889 baseband/*: missing internal dependency, "bbdev" 00:01:42.889 gpu/*: missing internal dependency, "gpudev" 00:01:42.889 00:01:42.889 00:01:42.889 Build targets in project: 85 00:01:42.889 00:01:42.889 DPDK 23.11.0 00:01:42.889 00:01:42.889 User defined options 00:01:42.889 buildtype : debug 00:01:42.889 default_library : shared 00:01:42.889 libdir : lib 00:01:42.889 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:42.889 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:42.889 c_link_args : 00:01:42.889 
cpu_instruction_set: native 00:01:42.889 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:42.889 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev 00:01:42.889 enable_docs : false 00:01:42.889 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:42.889 enable_kmods : false 00:01:42.889 tests : false 00:01:42.889 00:01:42.889 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:43.155 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:43.155 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:43.155 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:43.155 [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:43.155 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:43.155 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:43.155 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:43.155 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:43.155 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:43.155 [9/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:43.420 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:43.420 [11/265] Linking static target lib/librte_kvargs.a 00:01:43.420 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:43.420 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:43.420 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:43.420 [15/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:43.420 [16/265] Linking static target lib/librte_log.a 00:01:43.420 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:43.420 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:43.420 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:43.420 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:43.679 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:43.941 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.200 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:44.200 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:44.200 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:44.200 [26/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:44.200 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:44.200 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:44.200 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:44.200 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 
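
The "User defined options" block above summarizes how SPDK's configure set up the bundled DPDK 23.11 build. A rough hand reconstruction of that configuration is sketched below; the exact flag spelling used by SPDK's scripts is not visible in the log, so treat this as an approximation rather than the command that actually ran:

  # Approximate reconstruction of the DPDK meson options echoed above.
  meson setup build-tmp \
      --buildtype=debug \
      --default-library=shared \
      --libdir=lib \
      --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
      -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon' \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Dtests=false -Denable_docs=false
  ninja -C build-tmp -j 48
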
00:01:44.200 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:44.200 [32/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:44.200 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:44.200 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:44.200 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:44.200 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:44.200 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:44.200 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:44.200 [39/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:44.200 [40/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:44.200 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:44.200 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:44.200 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:44.200 [44/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:44.200 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:44.200 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:44.200 [47/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:44.200 [48/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:44.200 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:44.200 [50/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:44.200 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:44.200 [52/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:44.200 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:44.200 [54/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:44.200 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:44.200 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:44.200 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:44.200 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:44.200 [59/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:44.466 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:44.466 [61/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:44.466 [62/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:44.466 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:44.466 [64/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:44.466 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:44.466 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:44.466 [67/265] Linking static target lib/librte_telemetry.a 00:01:44.466 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:44.466 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:44.466 [70/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 
00:01:44.466 [71/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:44.466 [72/265] Linking static target lib/librte_pci.a 00:01:44.466 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:44.724 [74/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:44.724 [75/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.724 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:44.724 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:44.724 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:44.724 [79/265] Linking target lib/librte_log.so.24.0 00:01:44.724 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:44.724 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:44.724 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:44.724 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:44.724 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:44.724 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:44.724 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:44.983 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:44.983 [88/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:44.983 [89/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:44.983 [90/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:44.983 [91/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:44.983 [92/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:44.983 [93/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.983 [94/265] Linking target lib/librte_kvargs.so.24.0 00:01:44.983 [95/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:44.983 [96/265] Linking static target lib/librte_ring.a 00:01:44.983 [97/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:45.244 [98/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:45.244 [99/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:45.244 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:45.244 [101/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:45.244 [102/265] Linking static target lib/librte_meter.a 00:01:45.244 [103/265] Linking static target lib/librte_eal.a 00:01:45.244 [104/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:45.244 [105/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:45.244 [106/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:45.244 [107/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:45.244 [108/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:45.244 [109/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:45.244 [110/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:45.244 [111/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:45.244 [112/265] Generating symbol 
file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:45.244 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:45.244 [114/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:45.244 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:45.244 [116/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:45.244 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:45.244 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:45.244 [119/265] Linking static target lib/librte_mempool.a 00:01:45.504 [120/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:45.504 [121/265] Linking static target lib/librte_rcu.a 00:01:45.504 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:45.504 [123/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:45.504 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:45.504 [125/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.504 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:45.504 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:45.504 [128/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:45.504 [129/265] Linking target lib/librte_telemetry.so.24.0 00:01:45.504 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:45.504 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:45.504 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:45.766 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:45.766 [134/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.766 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:45.766 [136/265] Linking static target lib/librte_cmdline.a 00:01:45.766 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:45.766 [138/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.766 [139/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:45.766 [140/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:45.766 [141/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:45.766 [142/265] Linking static target lib/librte_net.a 00:01:45.766 [143/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:45.766 [144/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:45.766 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:45.766 [146/265] Linking static target lib/librte_timer.a 00:01:45.766 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:46.026 [148/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:46.026 [149/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:46.026 [150/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.026 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:46.026 [152/265] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:46.026 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:46.026 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:46.286 [155/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:46.286 [156/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.286 [157/265] Linking static target lib/librte_dmadev.a 00:01:46.286 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:46.286 [159/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:46.286 [160/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:46.286 [161/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.286 [162/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:46.286 [163/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:46.286 [164/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:46.286 [165/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:46.286 [166/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:46.286 [167/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:46.286 [168/265] Linking static target lib/librte_hash.a 00:01:46.286 [169/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:46.286 [170/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.286 [171/265] Linking static target lib/librte_power.a 00:01:46.545 [172/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:46.545 [173/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:46.545 [174/265] Linking static target lib/librte_compressdev.a 00:01:46.545 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:46.545 [176/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:46.545 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:46.545 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:46.545 [179/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:46.545 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:46.545 [181/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.545 [182/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:46.545 [183/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:46.545 [184/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:46.545 [185/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.545 [186/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:46.804 [187/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:46.804 [188/265] Linking static target lib/librte_mbuf.a 00:01:46.804 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:46.804 [190/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:46.804 [191/265] Linking static target lib/librte_reorder.a 00:01:46.804 [192/265] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:46.804 [193/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:46.804 [194/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:46.804 [195/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:46.804 [196/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.804 [197/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.804 [198/265] Linking static target drivers/librte_bus_vdev.a 00:01:46.804 [199/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.804 [200/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.063 [201/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.063 [202/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:47.063 [203/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:47.063 [204/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.063 [205/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:47.063 [206/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:47.063 [207/265] Linking static target lib/librte_security.a 00:01:47.063 [208/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.063 [209/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.063 [210/265] Linking static target drivers/librte_bus_pci.a 00:01:47.063 [211/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.063 [212/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.063 [213/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:47.063 [214/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.063 [215/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.063 [216/265] Linking static target drivers/librte_mempool_ring.a 00:01:47.322 [217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:47.322 [218/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:47.322 [219/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:47.322 [220/265] Linking static target lib/librte_cryptodev.a 00:01:47.322 [221/265] Linking static target lib/librte_ethdev.a 00:01:47.322 [222/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.580 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.516 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.451 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:51.353 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.353 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.611 [228/265] Linking target lib/librte_eal.so.24.0 00:01:51.611 [229/265] Generating 
symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:51.611 [230/265] Linking target lib/librte_ring.so.24.0 00:01:51.611 [231/265] Linking target lib/librte_meter.so.24.0 00:01:51.611 [232/265] Linking target lib/librte_pci.so.24.0 00:01:51.611 [233/265] Linking target lib/librte_timer.so.24.0 00:01:51.611 [234/265] Linking target lib/librte_dmadev.so.24.0 00:01:51.611 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:51.870 [236/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:51.870 [237/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:51.870 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:51.870 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:51.870 [240/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:51.870 [241/265] Linking target lib/librte_rcu.so.24.0 00:01:51.870 [242/265] Linking target lib/librte_mempool.so.24.0 00:01:51.870 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:51.870 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:51.870 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:51.870 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:51.870 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:52.128 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:52.128 [249/265] Linking target lib/librte_reorder.so.24.0 00:01:52.128 [250/265] Linking target lib/librte_compressdev.so.24.0 00:01:52.128 [251/265] Linking target lib/librte_net.so.24.0 00:01:52.128 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:52.387 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:52.387 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:52.387 [255/265] Linking target lib/librte_hash.so.24.0 00:01:52.387 [256/265] Linking target lib/librte_security.so.24.0 00:01:52.387 [257/265] Linking target lib/librte_cmdline.so.24.0 00:01:52.387 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:52.387 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:52.387 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:52.645 [261/265] Linking target lib/librte_power.so.24.0 00:01:55.174 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:55.175 [263/265] Linking static target lib/librte_vhost.a 00:01:55.741 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.000 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:56.000 INFO: autodetecting backend as ninja 00:01:56.000 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:56.936 CC lib/ut_mock/mock.o 00:01:56.936 CC lib/log/log.o 00:01:56.936 CC lib/log/log_flags.o 00:01:56.936 CC lib/log/log_deprecated.o 00:01:56.936 CC lib/ut/ut.o 00:01:56.936 LIB libspdk_ut_mock.a 00:01:56.936 SO libspdk_ut_mock.so.5.0 00:01:56.936 LIB libspdk_log.a 00:01:56.936 LIB libspdk_ut.a 00:01:56.936 SO libspdk_ut.so.1.0 00:01:56.936 SO libspdk_log.so.6.1 00:01:56.936 SYMLINK libspdk_ut_mock.so 00:01:56.936 SYMLINK libspdk_ut.so 
00:01:56.936 SYMLINK libspdk_log.so 00:01:57.194 CC lib/dma/dma.o 00:01:57.194 CC lib/ioat/ioat.o 00:01:57.194 CXX lib/trace_parser/trace.o 00:01:57.194 CC lib/util/base64.o 00:01:57.194 CC lib/util/bit_array.o 00:01:57.194 CC lib/util/cpuset.o 00:01:57.194 CC lib/util/crc16.o 00:01:57.194 CC lib/util/crc32.o 00:01:57.194 CC lib/util/crc32c.o 00:01:57.194 CC lib/util/crc32_ieee.o 00:01:57.195 CC lib/util/crc64.o 00:01:57.195 CC lib/util/dif.o 00:01:57.195 CC lib/util/fd.o 00:01:57.195 CC lib/util/file.o 00:01:57.195 CC lib/util/hexlify.o 00:01:57.195 CC lib/util/iov.o 00:01:57.195 CC lib/util/math.o 00:01:57.195 CC lib/util/pipe.o 00:01:57.195 CC lib/util/strerror_tls.o 00:01:57.195 CC lib/util/string.o 00:01:57.195 CC lib/util/uuid.o 00:01:57.195 CC lib/util/fd_group.o 00:01:57.195 CC lib/util/xor.o 00:01:57.195 CC lib/util/zipf.o 00:01:57.195 CC lib/vfio_user/host/vfio_user_pci.o 00:01:57.195 CC lib/vfio_user/host/vfio_user.o 00:01:57.453 LIB libspdk_dma.a 00:01:57.453 SO libspdk_dma.so.3.0 00:01:57.453 SYMLINK libspdk_dma.so 00:01:57.453 LIB libspdk_ioat.a 00:01:57.453 SO libspdk_ioat.so.6.0 00:01:57.453 LIB libspdk_vfio_user.a 00:01:57.453 SO libspdk_vfio_user.so.4.0 00:01:57.453 SYMLINK libspdk_ioat.so 00:01:57.453 SYMLINK libspdk_vfio_user.so 00:01:57.712 LIB libspdk_util.a 00:01:57.712 SO libspdk_util.so.8.0 00:01:57.971 SYMLINK libspdk_util.so 00:01:57.971 CC lib/conf/conf.o 00:01:57.971 CC lib/json/json_parse.o 00:01:57.971 CC lib/rdma/common.o 00:01:57.971 CC lib/vmd/vmd.o 00:01:57.971 CC lib/env_dpdk/env.o 00:01:57.971 CC lib/json/json_util.o 00:01:57.971 CC lib/rdma/rdma_verbs.o 00:01:57.971 CC lib/vmd/led.o 00:01:57.971 CC lib/idxd/idxd.o 00:01:57.971 CC lib/json/json_write.o 00:01:57.971 CC lib/env_dpdk/memory.o 00:01:57.971 CC lib/env_dpdk/pci.o 00:01:57.971 CC lib/idxd/idxd_user.o 00:01:57.971 CC lib/env_dpdk/init.o 00:01:57.971 CC lib/idxd/idxd_kernel.o 00:01:57.971 CC lib/env_dpdk/threads.o 00:01:57.971 CC lib/env_dpdk/pci_ioat.o 00:01:57.971 CC lib/env_dpdk/pci_virtio.o 00:01:57.971 CC lib/env_dpdk/pci_vmd.o 00:01:57.971 CC lib/env_dpdk/pci_idxd.o 00:01:57.971 CC lib/env_dpdk/pci_event.o 00:01:57.971 CC lib/env_dpdk/sigbus_handler.o 00:01:57.971 CC lib/env_dpdk/pci_dpdk.o 00:01:57.971 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:57.971 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:57.971 LIB libspdk_trace_parser.a 00:01:57.971 SO libspdk_trace_parser.so.4.0 00:01:58.251 SYMLINK libspdk_trace_parser.so 00:01:58.251 LIB libspdk_conf.a 00:01:58.251 SO libspdk_conf.so.5.0 00:01:58.251 LIB libspdk_json.a 00:01:58.251 LIB libspdk_rdma.a 00:01:58.251 SYMLINK libspdk_conf.so 00:01:58.251 SO libspdk_rdma.so.5.0 00:01:58.251 SO libspdk_json.so.5.1 00:01:58.529 SYMLINK libspdk_rdma.so 00:01:58.529 SYMLINK libspdk_json.so 00:01:58.529 CC lib/jsonrpc/jsonrpc_server.o 00:01:58.529 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:58.529 CC lib/jsonrpc/jsonrpc_client.o 00:01:58.529 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:58.529 LIB libspdk_idxd.a 00:01:58.529 LIB libspdk_vmd.a 00:01:58.788 SO libspdk_idxd.so.11.0 00:01:58.788 SO libspdk_vmd.so.5.0 00:01:58.788 SYMLINK libspdk_idxd.so 00:01:58.788 SYMLINK libspdk_vmd.so 00:01:58.788 LIB libspdk_jsonrpc.a 00:01:58.788 SO libspdk_jsonrpc.so.5.1 00:01:58.788 SYMLINK libspdk_jsonrpc.so 00:01:59.046 CC lib/rpc/rpc.o 00:01:59.046 LIB libspdk_rpc.a 00:01:59.046 SO libspdk_rpc.so.5.0 00:01:59.304 SYMLINK libspdk_rpc.so 00:01:59.304 CC lib/trace/trace.o 00:01:59.304 CC lib/trace/trace_flags.o 00:01:59.304 CC lib/trace/trace_rpc.o 00:01:59.304 CC lib/notify/notify.o 
00:01:59.304 CC lib/sock/sock.o 00:01:59.304 CC lib/notify/notify_rpc.o 00:01:59.304 CC lib/sock/sock_rpc.o 00:01:59.562 LIB libspdk_notify.a 00:01:59.562 SO libspdk_notify.so.5.0 00:01:59.562 LIB libspdk_trace.a 00:01:59.562 SYMLINK libspdk_notify.so 00:01:59.562 SO libspdk_trace.so.9.0 00:01:59.562 SYMLINK libspdk_trace.so 00:01:59.820 LIB libspdk_sock.a 00:01:59.820 SO libspdk_sock.so.8.0 00:01:59.820 CC lib/thread/thread.o 00:01:59.820 CC lib/thread/iobuf.o 00:01:59.820 SYMLINK libspdk_sock.so 00:01:59.820 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:59.820 CC lib/nvme/nvme_ctrlr.o 00:01:59.820 CC lib/nvme/nvme_fabric.o 00:01:59.820 CC lib/nvme/nvme_ns_cmd.o 00:01:59.821 CC lib/nvme/nvme_ns.o 00:01:59.821 CC lib/nvme/nvme_pcie_common.o 00:01:59.821 CC lib/nvme/nvme_pcie.o 00:01:59.821 CC lib/nvme/nvme_qpair.o 00:01:59.821 CC lib/nvme/nvme.o 00:01:59.821 CC lib/nvme/nvme_quirks.o 00:01:59.821 CC lib/nvme/nvme_transport.o 00:01:59.821 CC lib/nvme/nvme_discovery.o 00:01:59.821 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:59.821 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:59.821 CC lib/nvme/nvme_tcp.o 00:01:59.821 CC lib/nvme/nvme_opal.o 00:01:59.821 CC lib/nvme/nvme_io_msg.o 00:01:59.821 CC lib/nvme/nvme_poll_group.o 00:01:59.821 CC lib/nvme/nvme_zns.o 00:01:59.821 CC lib/nvme/nvme_cuse.o 00:01:59.821 CC lib/nvme/nvme_vfio_user.o 00:01:59.821 CC lib/nvme/nvme_rdma.o 00:02:00.079 LIB libspdk_env_dpdk.a 00:02:00.079 SO libspdk_env_dpdk.so.13.0 00:02:00.337 SYMLINK libspdk_env_dpdk.so 00:02:01.270 LIB libspdk_thread.a 00:02:01.271 SO libspdk_thread.so.9.0 00:02:01.529 SYMLINK libspdk_thread.so 00:02:01.529 CC lib/accel/accel.o 00:02:01.529 CC lib/virtio/virtio.o 00:02:01.529 CC lib/virtio/virtio_vhost_user.o 00:02:01.529 CC lib/init/json_config.o 00:02:01.529 CC lib/blob/blobstore.o 00:02:01.529 CC lib/virtio/virtio_vfio_user.o 00:02:01.529 CC lib/accel/accel_rpc.o 00:02:01.529 CC lib/init/subsystem.o 00:02:01.529 CC lib/virtio/virtio_pci.o 00:02:01.529 CC lib/blob/request.o 00:02:01.529 CC lib/accel/accel_sw.o 00:02:01.529 CC lib/blob/zeroes.o 00:02:01.529 CC lib/init/subsystem_rpc.o 00:02:01.529 CC lib/blob/blob_bs_dev.o 00:02:01.529 CC lib/init/rpc.o 00:02:01.786 LIB libspdk_init.a 00:02:01.786 SO libspdk_init.so.4.0 00:02:01.786 LIB libspdk_virtio.a 00:02:01.786 SYMLINK libspdk_init.so 00:02:01.786 SO libspdk_virtio.so.6.0 00:02:02.044 SYMLINK libspdk_virtio.so 00:02:02.044 CC lib/event/app.o 00:02:02.044 CC lib/event/reactor.o 00:02:02.044 CC lib/event/log_rpc.o 00:02:02.044 CC lib/event/app_rpc.o 00:02:02.044 CC lib/event/scheduler_static.o 00:02:02.044 LIB libspdk_nvme.a 00:02:02.302 SO libspdk_nvme.so.12.0 00:02:02.302 LIB libspdk_event.a 00:02:02.302 SO libspdk_event.so.12.0 00:02:02.560 SYMLINK libspdk_event.so 00:02:02.560 LIB libspdk_accel.a 00:02:02.560 SYMLINK libspdk_nvme.so 00:02:02.560 SO libspdk_accel.so.14.0 00:02:02.560 SYMLINK libspdk_accel.so 00:02:02.818 CC lib/bdev/bdev.o 00:02:02.818 CC lib/bdev/bdev_rpc.o 00:02:02.818 CC lib/bdev/bdev_zone.o 00:02:02.818 CC lib/bdev/part.o 00:02:02.818 CC lib/bdev/scsi_nvme.o 00:02:04.190 LIB libspdk_blob.a 00:02:04.448 SO libspdk_blob.so.10.1 00:02:04.448 SYMLINK libspdk_blob.so 00:02:04.448 CC lib/blobfs/blobfs.o 00:02:04.448 CC lib/blobfs/tree.o 00:02:04.448 CC lib/lvol/lvol.o 00:02:05.395 LIB libspdk_bdev.a 00:02:05.395 SO libspdk_bdev.so.14.0 00:02:05.395 LIB libspdk_blobfs.a 00:02:05.395 SO libspdk_blobfs.so.9.0 00:02:05.395 SYMLINK libspdk_blobfs.so 00:02:05.395 SYMLINK libspdk_bdev.so 00:02:05.395 LIB libspdk_lvol.a 00:02:05.395 SO 
libspdk_lvol.so.9.1 00:02:05.395 SYMLINK libspdk_lvol.so 00:02:05.395 CC lib/ublk/ublk.o 00:02:05.395 CC lib/nbd/nbd.o 00:02:05.395 CC lib/ublk/ublk_rpc.o 00:02:05.395 CC lib/ftl/ftl_core.o 00:02:05.395 CC lib/nbd/nbd_rpc.o 00:02:05.395 CC lib/scsi/dev.o 00:02:05.395 CC lib/nvmf/ctrlr.o 00:02:05.395 CC lib/ftl/ftl_init.o 00:02:05.395 CC lib/scsi/lun.o 00:02:05.395 CC lib/nvmf/ctrlr_discovery.o 00:02:05.395 CC lib/ftl/ftl_layout.o 00:02:05.395 CC lib/scsi/port.o 00:02:05.395 CC lib/nvmf/ctrlr_bdev.o 00:02:05.395 CC lib/scsi/scsi.o 00:02:05.395 CC lib/nvmf/subsystem.o 00:02:05.395 CC lib/ftl/ftl_debug.o 00:02:05.395 CC lib/scsi/scsi_bdev.o 00:02:05.395 CC lib/nvmf/nvmf.o 00:02:05.395 CC lib/ftl/ftl_io.o 00:02:05.395 CC lib/scsi/scsi_pr.o 00:02:05.395 CC lib/nvmf/nvmf_rpc.o 00:02:05.395 CC lib/ftl/ftl_sb.o 00:02:05.395 CC lib/scsi/scsi_rpc.o 00:02:05.395 CC lib/nvmf/transport.o 00:02:05.395 CC lib/scsi/task.o 00:02:05.395 CC lib/nvmf/tcp.o 00:02:05.395 CC lib/ftl/ftl_l2p.o 00:02:05.395 CC lib/ftl/ftl_l2p_flat.o 00:02:05.395 CC lib/nvmf/rdma.o 00:02:05.395 CC lib/ftl/ftl_nv_cache.o 00:02:05.395 CC lib/ftl/ftl_band.o 00:02:05.395 CC lib/ftl/ftl_band_ops.o 00:02:05.395 CC lib/ftl/ftl_writer.o 00:02:05.395 CC lib/ftl/ftl_rq.o 00:02:05.395 CC lib/ftl/ftl_reloc.o 00:02:05.395 CC lib/ftl/ftl_l2p_cache.o 00:02:05.395 CC lib/ftl/ftl_p2l.o 00:02:05.395 CC lib/ftl/mngt/ftl_mngt.o 00:02:05.395 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:05.395 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:05.395 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:05.395 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:05.396 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:05.396 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:05.396 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:05.396 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:05.396 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:05.396 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:05.654 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:05.654 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:05.654 CC lib/ftl/utils/ftl_conf.o 00:02:05.913 CC lib/ftl/utils/ftl_md.o 00:02:05.913 CC lib/ftl/utils/ftl_mempool.o 00:02:05.913 CC lib/ftl/utils/ftl_bitmap.o 00:02:05.913 CC lib/ftl/utils/ftl_property.o 00:02:05.913 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:05.913 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:05.913 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:05.913 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:05.913 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:05.913 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:05.913 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:05.913 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:05.913 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:05.913 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:05.913 CC lib/ftl/base/ftl_base_dev.o 00:02:05.913 CC lib/ftl/base/ftl_base_bdev.o 00:02:05.913 CC lib/ftl/ftl_trace.o 00:02:06.172 LIB libspdk_nbd.a 00:02:06.172 SO libspdk_nbd.so.6.0 00:02:06.172 SYMLINK libspdk_nbd.so 00:02:06.431 LIB libspdk_scsi.a 00:02:06.431 SO libspdk_scsi.so.8.0 00:02:06.431 LIB libspdk_ublk.a 00:02:06.431 SO libspdk_ublk.so.2.0 00:02:06.431 SYMLINK libspdk_scsi.so 00:02:06.431 SYMLINK libspdk_ublk.so 00:02:06.431 CC lib/iscsi/conn.o 00:02:06.431 CC lib/vhost/vhost.o 00:02:06.431 CC lib/vhost/vhost_rpc.o 00:02:06.431 CC lib/iscsi/init_grp.o 00:02:06.431 CC lib/iscsi/iscsi.o 00:02:06.431 CC lib/vhost/vhost_scsi.o 00:02:06.431 CC lib/iscsi/md5.o 00:02:06.431 CC lib/iscsi/param.o 00:02:06.431 CC lib/vhost/vhost_blk.o 00:02:06.431 CC lib/iscsi/portal_grp.o 00:02:06.431 CC lib/vhost/rte_vhost_user.o 00:02:06.431 CC lib/iscsi/tgt_node.o 00:02:06.431 CC 
lib/iscsi/iscsi_subsystem.o 00:02:06.431 CC lib/iscsi/iscsi_rpc.o 00:02:06.431 CC lib/iscsi/task.o 00:02:06.689 LIB libspdk_ftl.a 00:02:06.947 SO libspdk_ftl.so.8.0 00:02:07.204 SYMLINK libspdk_ftl.so 00:02:07.769 LIB libspdk_vhost.a 00:02:07.769 SO libspdk_vhost.so.7.1 00:02:07.769 LIB libspdk_nvmf.a 00:02:08.028 SYMLINK libspdk_vhost.so 00:02:08.028 SO libspdk_nvmf.so.17.0 00:02:08.028 LIB libspdk_iscsi.a 00:02:08.028 SO libspdk_iscsi.so.7.0 00:02:08.028 SYMLINK libspdk_nvmf.so 00:02:08.287 SYMLINK libspdk_iscsi.so 00:02:08.287 CC module/env_dpdk/env_dpdk_rpc.o 00:02:08.287 CC module/accel/ioat/accel_ioat.o 00:02:08.287 CC module/sock/posix/posix.o 00:02:08.287 CC module/accel/error/accel_error.o 00:02:08.287 CC module/accel/ioat/accel_ioat_rpc.o 00:02:08.287 CC module/accel/dsa/accel_dsa.o 00:02:08.287 CC module/accel/error/accel_error_rpc.o 00:02:08.287 CC module/accel/dsa/accel_dsa_rpc.o 00:02:08.287 CC module/blob/bdev/blob_bdev.o 00:02:08.287 CC module/scheduler/gscheduler/gscheduler.o 00:02:08.287 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:08.287 CC module/accel/iaa/accel_iaa.o 00:02:08.287 CC module/accel/iaa/accel_iaa_rpc.o 00:02:08.287 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:08.544 LIB libspdk_env_dpdk_rpc.a 00:02:08.544 SO libspdk_env_dpdk_rpc.so.5.0 00:02:08.544 LIB libspdk_scheduler_gscheduler.a 00:02:08.544 LIB libspdk_scheduler_dpdk_governor.a 00:02:08.544 SYMLINK libspdk_env_dpdk_rpc.so 00:02:08.544 SO libspdk_scheduler_gscheduler.so.3.0 00:02:08.544 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:08.544 LIB libspdk_accel_error.a 00:02:08.544 LIB libspdk_accel_ioat.a 00:02:08.544 LIB libspdk_scheduler_dynamic.a 00:02:08.544 LIB libspdk_accel_iaa.a 00:02:08.544 SO libspdk_accel_error.so.1.0 00:02:08.544 SO libspdk_accel_ioat.so.5.0 00:02:08.544 SO libspdk_scheduler_dynamic.so.3.0 00:02:08.544 SYMLINK libspdk_scheduler_gscheduler.so 00:02:08.544 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:08.544 SO libspdk_accel_iaa.so.2.0 00:02:08.544 LIB libspdk_accel_dsa.a 00:02:08.802 SYMLINK libspdk_accel_error.so 00:02:08.802 SYMLINK libspdk_accel_ioat.so 00:02:08.802 LIB libspdk_blob_bdev.a 00:02:08.802 SYMLINK libspdk_scheduler_dynamic.so 00:02:08.802 SO libspdk_accel_dsa.so.4.0 00:02:08.802 SYMLINK libspdk_accel_iaa.so 00:02:08.802 SO libspdk_blob_bdev.so.10.1 00:02:08.802 SYMLINK libspdk_accel_dsa.so 00:02:08.802 SYMLINK libspdk_blob_bdev.so 00:02:08.802 CC module/bdev/gpt/gpt.o 00:02:08.802 CC module/bdev/nvme/bdev_nvme.o 00:02:08.802 CC module/bdev/raid/bdev_raid.o 00:02:08.802 CC module/bdev/split/vbdev_split.o 00:02:08.802 CC module/bdev/passthru/vbdev_passthru.o 00:02:08.802 CC module/bdev/split/vbdev_split_rpc.o 00:02:08.802 CC module/bdev/null/bdev_null.o 00:02:08.802 CC module/bdev/malloc/bdev_malloc.o 00:02:08.802 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:08.802 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:08.802 CC module/bdev/ftl/bdev_ftl.o 00:02:08.802 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:08.802 CC module/bdev/gpt/vbdev_gpt.o 00:02:08.802 CC module/bdev/raid/bdev_raid_rpc.o 00:02:08.802 CC module/bdev/null/bdev_null_rpc.o 00:02:08.802 CC module/bdev/nvme/nvme_rpc.o 00:02:08.802 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:08.802 CC module/blobfs/bdev/blobfs_bdev.o 00:02:08.802 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:08.802 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:08.802 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:08.802 CC module/bdev/lvol/vbdev_lvol.o 00:02:08.802 CC module/blobfs/bdev/blobfs_bdev_rpc.o 
00:02:08.802 CC module/bdev/delay/vbdev_delay.o 00:02:08.802 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:08.802 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:08.802 CC module/bdev/error/vbdev_error.o 00:02:08.802 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:08.802 CC module/bdev/raid/bdev_raid_sb.o 00:02:08.802 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:08.802 CC module/bdev/error/vbdev_error_rpc.o 00:02:08.802 CC module/bdev/nvme/bdev_mdns_client.o 00:02:08.802 CC module/bdev/raid/raid0.o 00:02:08.802 CC module/bdev/nvme/vbdev_opal.o 00:02:08.802 CC module/bdev/raid/raid1.o 00:02:08.802 CC module/bdev/raid/concat.o 00:02:08.802 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:08.803 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:08.803 CC module/bdev/iscsi/bdev_iscsi.o 00:02:08.803 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:08.803 CC module/bdev/aio/bdev_aio.o 00:02:08.803 CC module/bdev/aio/bdev_aio_rpc.o 00:02:09.368 LIB libspdk_sock_posix.a 00:02:09.368 SO libspdk_sock_posix.so.5.0 00:02:09.368 LIB libspdk_blobfs_bdev.a 00:02:09.368 SO libspdk_blobfs_bdev.so.5.0 00:02:09.368 LIB libspdk_bdev_split.a 00:02:09.368 LIB libspdk_bdev_ftl.a 00:02:09.368 SYMLINK libspdk_sock_posix.so 00:02:09.368 SO libspdk_bdev_split.so.5.0 00:02:09.368 SO libspdk_bdev_ftl.so.5.0 00:02:09.368 SYMLINK libspdk_blobfs_bdev.so 00:02:09.368 LIB libspdk_bdev_gpt.a 00:02:09.368 SYMLINK libspdk_bdev_split.so 00:02:09.368 SYMLINK libspdk_bdev_ftl.so 00:02:09.368 LIB libspdk_bdev_null.a 00:02:09.368 SO libspdk_bdev_gpt.so.5.0 00:02:09.368 LIB libspdk_bdev_passthru.a 00:02:09.368 LIB libspdk_bdev_error.a 00:02:09.368 SO libspdk_bdev_null.so.5.0 00:02:09.368 SO libspdk_bdev_passthru.so.5.0 00:02:09.368 SO libspdk_bdev_error.so.5.0 00:02:09.368 LIB libspdk_bdev_aio.a 00:02:09.368 SYMLINK libspdk_bdev_gpt.so 00:02:09.368 SO libspdk_bdev_aio.so.5.0 00:02:09.368 LIB libspdk_bdev_zone_block.a 00:02:09.368 LIB libspdk_bdev_iscsi.a 00:02:09.368 SYMLINK libspdk_bdev_null.so 00:02:09.368 SYMLINK libspdk_bdev_passthru.so 00:02:09.368 LIB libspdk_bdev_malloc.a 00:02:09.627 SYMLINK libspdk_bdev_error.so 00:02:09.627 SO libspdk_bdev_zone_block.so.5.0 00:02:09.627 SO libspdk_bdev_iscsi.so.5.0 00:02:09.627 LIB libspdk_bdev_delay.a 00:02:09.627 SO libspdk_bdev_malloc.so.5.0 00:02:09.627 SYMLINK libspdk_bdev_aio.so 00:02:09.627 SO libspdk_bdev_delay.so.5.0 00:02:09.627 SYMLINK libspdk_bdev_zone_block.so 00:02:09.627 SYMLINK libspdk_bdev_iscsi.so 00:02:09.627 SYMLINK libspdk_bdev_malloc.so 00:02:09.627 LIB libspdk_bdev_lvol.a 00:02:09.627 SYMLINK libspdk_bdev_delay.so 00:02:09.627 LIB libspdk_bdev_virtio.a 00:02:09.627 SO libspdk_bdev_lvol.so.5.0 00:02:09.627 SO libspdk_bdev_virtio.so.5.0 00:02:09.627 SYMLINK libspdk_bdev_lvol.so 00:02:09.627 SYMLINK libspdk_bdev_virtio.so 00:02:09.884 LIB libspdk_bdev_raid.a 00:02:10.142 SO libspdk_bdev_raid.so.5.0 00:02:10.142 SYMLINK libspdk_bdev_raid.so 00:02:11.079 LIB libspdk_bdev_nvme.a 00:02:11.338 SO libspdk_bdev_nvme.so.6.0 00:02:11.338 SYMLINK libspdk_bdev_nvme.so 00:02:11.596 CC module/event/subsystems/iobuf/iobuf.o 00:02:11.596 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:11.596 CC module/event/subsystems/vmd/vmd.o 00:02:11.596 CC module/event/subsystems/sock/sock.o 00:02:11.596 CC module/event/subsystems/scheduler/scheduler.o 00:02:11.596 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:11.596 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:11.596 LIB libspdk_event_sock.a 00:02:11.596 LIB libspdk_event_vhost_blk.a 00:02:11.596 LIB libspdk_event_scheduler.a 00:02:11.596 LIB 
libspdk_event_vmd.a 00:02:11.596 LIB libspdk_event_iobuf.a 00:02:11.596 SO libspdk_event_sock.so.4.0 00:02:11.596 SO libspdk_event_vhost_blk.so.2.0 00:02:11.854 SO libspdk_event_scheduler.so.3.0 00:02:11.854 SO libspdk_event_vmd.so.5.0 00:02:11.854 SO libspdk_event_iobuf.so.2.0 00:02:11.854 SYMLINK libspdk_event_sock.so 00:02:11.854 SYMLINK libspdk_event_vhost_blk.so 00:02:11.854 SYMLINK libspdk_event_scheduler.so 00:02:11.854 SYMLINK libspdk_event_vmd.so 00:02:11.854 SYMLINK libspdk_event_iobuf.so 00:02:11.854 CC module/event/subsystems/accel/accel.o 00:02:12.112 LIB libspdk_event_accel.a 00:02:12.112 SO libspdk_event_accel.so.5.0 00:02:12.112 SYMLINK libspdk_event_accel.so 00:02:12.370 CC module/event/subsystems/bdev/bdev.o 00:02:12.370 LIB libspdk_event_bdev.a 00:02:12.370 SO libspdk_event_bdev.so.5.0 00:02:12.634 SYMLINK libspdk_event_bdev.so 00:02:12.634 CC module/event/subsystems/nbd/nbd.o 00:02:12.634 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:12.634 CC module/event/subsystems/scsi/scsi.o 00:02:12.634 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:12.634 CC module/event/subsystems/ublk/ublk.o 00:02:12.634 LIB libspdk_event_nbd.a 00:02:12.634 LIB libspdk_event_ublk.a 00:02:12.935 LIB libspdk_event_scsi.a 00:02:12.935 SO libspdk_event_ublk.so.2.0 00:02:12.935 SO libspdk_event_nbd.so.5.0 00:02:12.935 SO libspdk_event_scsi.so.5.0 00:02:12.935 SYMLINK libspdk_event_ublk.so 00:02:12.935 SYMLINK libspdk_event_nbd.so 00:02:12.935 SYMLINK libspdk_event_scsi.so 00:02:12.935 LIB libspdk_event_nvmf.a 00:02:12.935 SO libspdk_event_nvmf.so.5.0 00:02:12.935 SYMLINK libspdk_event_nvmf.so 00:02:12.935 CC module/event/subsystems/iscsi/iscsi.o 00:02:12.935 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:13.192 LIB libspdk_event_vhost_scsi.a 00:02:13.192 LIB libspdk_event_iscsi.a 00:02:13.192 SO libspdk_event_vhost_scsi.so.2.0 00:02:13.192 SO libspdk_event_iscsi.so.5.0 00:02:13.192 SYMLINK libspdk_event_vhost_scsi.so 00:02:13.192 SYMLINK libspdk_event_iscsi.so 00:02:13.192 SO libspdk.so.5.0 00:02:13.192 SYMLINK libspdk.so 00:02:13.457 CC app/spdk_nvme_identify/identify.o 00:02:13.457 CC app/trace_record/trace_record.o 00:02:13.457 CC app/spdk_lspci/spdk_lspci.o 00:02:13.457 CXX app/trace/trace.o 00:02:13.457 CC app/spdk_nvme_discover/discovery_aer.o 00:02:13.457 CC app/spdk_nvme_perf/perf.o 00:02:13.457 CC app/spdk_top/spdk_top.o 00:02:13.457 TEST_HEADER include/spdk/accel.h 00:02:13.457 TEST_HEADER include/spdk/accel_module.h 00:02:13.457 TEST_HEADER include/spdk/assert.h 00:02:13.457 TEST_HEADER include/spdk/barrier.h 00:02:13.457 CC test/rpc_client/rpc_client_test.o 00:02:13.457 TEST_HEADER include/spdk/base64.h 00:02:13.457 TEST_HEADER include/spdk/bdev.h 00:02:13.457 TEST_HEADER include/spdk/bdev_module.h 00:02:13.457 TEST_HEADER include/spdk/bdev_zone.h 00:02:13.457 TEST_HEADER include/spdk/bit_array.h 00:02:13.457 TEST_HEADER include/spdk/bit_pool.h 00:02:13.457 TEST_HEADER include/spdk/blob_bdev.h 00:02:13.457 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:13.457 TEST_HEADER include/spdk/blobfs.h 00:02:13.457 TEST_HEADER include/spdk/blob.h 00:02:13.457 TEST_HEADER include/spdk/conf.h 00:02:13.457 TEST_HEADER include/spdk/config.h 00:02:13.457 TEST_HEADER include/spdk/cpuset.h 00:02:13.457 TEST_HEADER include/spdk/crc16.h 00:02:13.457 CC app/spdk_dd/spdk_dd.o 00:02:13.457 TEST_HEADER include/spdk/crc32.h 00:02:13.457 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:13.457 TEST_HEADER include/spdk/crc64.h 00:02:13.457 TEST_HEADER include/spdk/dif.h 00:02:13.457 CC 
app/nvmf_tgt/nvmf_main.o 00:02:13.457 TEST_HEADER include/spdk/dma.h 00:02:13.457 TEST_HEADER include/spdk/endian.h 00:02:13.457 TEST_HEADER include/spdk/env_dpdk.h 00:02:13.457 CC app/iscsi_tgt/iscsi_tgt.o 00:02:13.457 TEST_HEADER include/spdk/env.h 00:02:13.457 TEST_HEADER include/spdk/event.h 00:02:13.457 CC app/vhost/vhost.o 00:02:13.457 CC examples/util/zipf/zipf.o 00:02:13.457 TEST_HEADER include/spdk/fd_group.h 00:02:13.457 CC examples/nvme/reconnect/reconnect.o 00:02:13.457 CC examples/vmd/lsvmd/lsvmd.o 00:02:13.457 TEST_HEADER include/spdk/fd.h 00:02:13.457 CC test/app/histogram_perf/histogram_perf.o 00:02:13.457 CC examples/nvme/hello_world/hello_world.o 00:02:13.457 CC examples/idxd/perf/perf.o 00:02:13.457 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:13.457 TEST_HEADER include/spdk/file.h 00:02:13.457 CC examples/ioat/perf/perf.o 00:02:13.457 CC examples/vmd/led/led.o 00:02:13.457 TEST_HEADER include/spdk/ftl.h 00:02:13.457 CC examples/ioat/verify/verify.o 00:02:13.457 TEST_HEADER include/spdk/gpt_spec.h 00:02:13.457 TEST_HEADER include/spdk/hexlify.h 00:02:13.457 CC app/fio/nvme/fio_plugin.o 00:02:13.457 CC examples/nvme/hotplug/hotplug.o 00:02:13.457 CC examples/sock/hello_world/hello_sock.o 00:02:13.457 CC test/nvme/aer/aer.o 00:02:13.457 CC test/event/event_perf/event_perf.o 00:02:13.457 TEST_HEADER include/spdk/histogram_data.h 00:02:13.457 CC examples/accel/perf/accel_perf.o 00:02:13.457 CC examples/nvme/arbitration/arbitration.o 00:02:13.457 TEST_HEADER include/spdk/idxd.h 00:02:13.457 TEST_HEADER include/spdk/idxd_spec.h 00:02:13.457 CC app/spdk_tgt/spdk_tgt.o 00:02:13.457 TEST_HEADER include/spdk/init.h 00:02:13.457 CC test/thread/poller_perf/poller_perf.o 00:02:13.457 TEST_HEADER include/spdk/ioat.h 00:02:13.457 TEST_HEADER include/spdk/ioat_spec.h 00:02:13.457 TEST_HEADER include/spdk/iscsi_spec.h 00:02:13.457 TEST_HEADER include/spdk/json.h 00:02:13.457 TEST_HEADER include/spdk/jsonrpc.h 00:02:13.457 TEST_HEADER include/spdk/likely.h 00:02:13.457 TEST_HEADER include/spdk/log.h 00:02:13.457 CC examples/blob/cli/blobcli.o 00:02:13.457 CC test/accel/dif/dif.o 00:02:13.457 TEST_HEADER include/spdk/lvol.h 00:02:13.457 TEST_HEADER include/spdk/memory.h 00:02:13.457 CC examples/blob/hello_world/hello_blob.o 00:02:13.457 TEST_HEADER include/spdk/mmio.h 00:02:13.716 TEST_HEADER include/spdk/nbd.h 00:02:13.716 TEST_HEADER include/spdk/notify.h 00:02:13.716 CC examples/bdev/hello_world/hello_bdev.o 00:02:13.716 CC examples/thread/thread/thread_ex.o 00:02:13.716 CC test/dma/test_dma/test_dma.o 00:02:13.716 TEST_HEADER include/spdk/nvme.h 00:02:13.716 CC test/blobfs/mkfs/mkfs.o 00:02:13.716 TEST_HEADER include/spdk/nvme_intel.h 00:02:13.716 CC examples/nvmf/nvmf/nvmf.o 00:02:13.716 CC test/bdev/bdevio/bdevio.o 00:02:13.716 CC examples/bdev/bdevperf/bdevperf.o 00:02:13.716 CC app/fio/bdev/fio_plugin.o 00:02:13.716 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:13.716 CC test/app/bdev_svc/bdev_svc.o 00:02:13.716 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:13.716 TEST_HEADER include/spdk/nvme_spec.h 00:02:13.716 TEST_HEADER include/spdk/nvme_zns.h 00:02:13.716 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:13.716 CC test/lvol/esnap/esnap.o 00:02:13.716 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:13.716 TEST_HEADER include/spdk/nvmf.h 00:02:13.716 TEST_HEADER include/spdk/nvmf_spec.h 00:02:13.716 TEST_HEADER include/spdk/nvmf_transport.h 00:02:13.716 TEST_HEADER include/spdk/opal.h 00:02:13.716 TEST_HEADER include/spdk/opal_spec.h 00:02:13.716 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:13.716 TEST_HEADER include/spdk/pci_ids.h 00:02:13.716 CC test/env/mem_callbacks/mem_callbacks.o 00:02:13.716 TEST_HEADER include/spdk/pipe.h 00:02:13.716 TEST_HEADER include/spdk/queue.h 00:02:13.716 TEST_HEADER include/spdk/reduce.h 00:02:13.716 TEST_HEADER include/spdk/rpc.h 00:02:13.716 TEST_HEADER include/spdk/scheduler.h 00:02:13.716 TEST_HEADER include/spdk/scsi.h 00:02:13.716 TEST_HEADER include/spdk/scsi_spec.h 00:02:13.716 TEST_HEADER include/spdk/sock.h 00:02:13.716 TEST_HEADER include/spdk/stdinc.h 00:02:13.716 TEST_HEADER include/spdk/string.h 00:02:13.716 TEST_HEADER include/spdk/thread.h 00:02:13.716 TEST_HEADER include/spdk/trace.h 00:02:13.716 TEST_HEADER include/spdk/trace_parser.h 00:02:13.716 LINK spdk_lspci 00:02:13.716 TEST_HEADER include/spdk/tree.h 00:02:13.716 TEST_HEADER include/spdk/ublk.h 00:02:13.716 TEST_HEADER include/spdk/util.h 00:02:13.716 TEST_HEADER include/spdk/uuid.h 00:02:13.716 TEST_HEADER include/spdk/version.h 00:02:13.716 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:13.716 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:13.716 TEST_HEADER include/spdk/vhost.h 00:02:13.716 TEST_HEADER include/spdk/vmd.h 00:02:13.716 TEST_HEADER include/spdk/xor.h 00:02:13.716 TEST_HEADER include/spdk/zipf.h 00:02:13.716 CXX test/cpp_headers/accel.o 00:02:13.716 LINK lsvmd 00:02:13.716 LINK histogram_perf 00:02:13.716 LINK rpc_client_test 00:02:13.716 LINK zipf 00:02:13.716 LINK led 00:02:13.716 LINK spdk_nvme_discover 00:02:13.982 LINK event_perf 00:02:13.982 LINK interrupt_tgt 00:02:13.982 LINK nvmf_tgt 00:02:13.982 LINK poller_perf 00:02:13.982 LINK vhost 00:02:13.982 LINK spdk_trace_record 00:02:13.982 LINK iscsi_tgt 00:02:13.982 LINK ioat_perf 00:02:13.982 LINK spdk_tgt 00:02:13.982 LINK verify 00:02:13.982 LINK bdev_svc 00:02:13.982 LINK hello_world 00:02:13.982 LINK mkfs 00:02:13.982 LINK hello_sock 00:02:13.982 LINK hotplug 00:02:13.982 LINK hello_bdev 00:02:13.982 LINK hello_blob 00:02:13.982 LINK thread 00:02:13.982 LINK aer 00:02:13.982 CXX test/cpp_headers/accel_module.o 00:02:13.982 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:14.248 LINK nvmf 00:02:14.248 LINK arbitration 00:02:14.248 LINK idxd_perf 00:02:14.248 LINK reconnect 00:02:14.248 LINK spdk_dd 00:02:14.248 CC test/event/reactor/reactor.o 00:02:14.248 CC test/env/vtophys/vtophys.o 00:02:14.248 CC test/nvme/reset/reset.o 00:02:14.248 CXX test/cpp_headers/assert.o 00:02:14.248 CC test/nvme/sgl/sgl.o 00:02:14.248 CXX test/cpp_headers/barrier.o 00:02:14.248 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:14.248 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:14.248 CXX test/cpp_headers/base64.o 00:02:14.248 CC examples/nvme/abort/abort.o 00:02:14.248 LINK spdk_trace 00:02:14.248 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:14.248 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:14.248 LINK dif 00:02:14.248 LINK bdevio 00:02:14.248 LINK test_dma 00:02:14.248 CC test/app/jsoncat/jsoncat.o 00:02:14.248 CXX test/cpp_headers/bdev.o 00:02:14.512 LINK accel_perf 00:02:14.512 CC test/nvme/e2edp/nvme_dp.o 00:02:14.512 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:14.512 LINK nvme_fuzz 00:02:14.512 CC test/app/stub/stub.o 00:02:14.512 CC test/nvme/overhead/overhead.o 00:02:14.512 CXX test/cpp_headers/bdev_module.o 00:02:14.512 CXX test/cpp_headers/bdev_zone.o 00:02:14.512 LINK cmb_copy 00:02:14.512 CC test/event/reactor_perf/reactor_perf.o 00:02:14.512 CXX test/cpp_headers/bit_array.o 00:02:14.512 CXX test/cpp_headers/bit_pool.o 00:02:14.512 
CC test/env/memory/memory_ut.o 00:02:14.512 CXX test/cpp_headers/blob_bdev.o 00:02:14.512 CC test/nvme/err_injection/err_injection.o 00:02:14.512 LINK nvme_manage 00:02:14.512 LINK reactor 00:02:14.512 LINK vtophys 00:02:14.512 CC test/env/pci/pci_ut.o 00:02:14.512 LINK spdk_bdev 00:02:14.512 LINK blobcli 00:02:14.512 LINK spdk_nvme 00:02:14.512 CXX test/cpp_headers/blobfs_bdev.o 00:02:14.512 CC test/nvme/startup/startup.o 00:02:14.512 CXX test/cpp_headers/blobfs.o 00:02:14.512 CC test/event/app_repeat/app_repeat.o 00:02:14.512 LINK jsoncat 00:02:14.774 LINK pmr_persistence 00:02:14.774 CC test/nvme/reserve/reserve.o 00:02:14.774 CC test/nvme/simple_copy/simple_copy.o 00:02:14.774 CC test/nvme/boot_partition/boot_partition.o 00:02:14.774 LINK reset 00:02:14.774 CC test/nvme/connect_stress/connect_stress.o 00:02:14.774 LINK env_dpdk_post_init 00:02:14.774 CC test/event/scheduler/scheduler.o 00:02:14.774 CXX test/cpp_headers/blob.o 00:02:14.774 LINK reactor_perf 00:02:14.774 LINK sgl 00:02:14.774 CXX test/cpp_headers/conf.o 00:02:14.774 CXX test/cpp_headers/config.o 00:02:14.774 CXX test/cpp_headers/cpuset.o 00:02:14.774 LINK stub 00:02:14.774 CC test/nvme/compliance/nvme_compliance.o 00:02:14.774 CXX test/cpp_headers/crc16.o 00:02:14.774 CXX test/cpp_headers/crc32.o 00:02:14.774 CXX test/cpp_headers/crc64.o 00:02:14.774 CXX test/cpp_headers/dif.o 00:02:14.774 CC test/nvme/fused_ordering/fused_ordering.o 00:02:14.774 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:14.774 CXX test/cpp_headers/dma.o 00:02:14.774 CC test/nvme/fdp/fdp.o 00:02:14.774 CXX test/cpp_headers/endian.o 00:02:14.774 CC test/nvme/cuse/cuse.o 00:02:14.774 CXX test/cpp_headers/env_dpdk.o 00:02:14.774 LINK mem_callbacks 00:02:14.774 CXX test/cpp_headers/env.o 00:02:14.774 LINK err_injection 00:02:14.774 CXX test/cpp_headers/event.o 00:02:15.039 CXX test/cpp_headers/fd_group.o 00:02:15.039 LINK startup 00:02:15.039 LINK app_repeat 00:02:15.039 CXX test/cpp_headers/fd.o 00:02:15.039 LINK nvme_dp 00:02:15.039 LINK spdk_nvme_perf 00:02:15.039 CXX test/cpp_headers/file.o 00:02:15.040 LINK abort 00:02:15.040 CXX test/cpp_headers/ftl.o 00:02:15.040 CXX test/cpp_headers/gpt_spec.o 00:02:15.040 LINK overhead 00:02:15.040 CXX test/cpp_headers/hexlify.o 00:02:15.040 LINK spdk_nvme_identify 00:02:15.040 CXX test/cpp_headers/histogram_data.o 00:02:15.040 CXX test/cpp_headers/idxd.o 00:02:15.040 LINK vhost_fuzz 00:02:15.040 LINK bdevperf 00:02:15.040 CXX test/cpp_headers/idxd_spec.o 00:02:15.040 LINK boot_partition 00:02:15.040 LINK spdk_top 00:02:15.040 LINK reserve 00:02:15.040 CXX test/cpp_headers/init.o 00:02:15.040 LINK connect_stress 00:02:15.040 CXX test/cpp_headers/ioat.o 00:02:15.040 CXX test/cpp_headers/ioat_spec.o 00:02:15.040 CXX test/cpp_headers/iscsi_spec.o 00:02:15.040 CXX test/cpp_headers/json.o 00:02:15.040 LINK simple_copy 00:02:15.301 CXX test/cpp_headers/jsonrpc.o 00:02:15.301 CXX test/cpp_headers/likely.o 00:02:15.301 CXX test/cpp_headers/log.o 00:02:15.301 LINK scheduler 00:02:15.301 CXX test/cpp_headers/lvol.o 00:02:15.301 LINK doorbell_aers 00:02:15.301 LINK fused_ordering 00:02:15.301 CXX test/cpp_headers/memory.o 00:02:15.301 CXX test/cpp_headers/mmio.o 00:02:15.301 CXX test/cpp_headers/nbd.o 00:02:15.301 CXX test/cpp_headers/notify.o 00:02:15.301 CXX test/cpp_headers/nvme.o 00:02:15.301 CXX test/cpp_headers/nvme_intel.o 00:02:15.301 CXX test/cpp_headers/nvme_ocssd.o 00:02:15.301 LINK pci_ut 00:02:15.301 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:15.301 CXX test/cpp_headers/nvme_spec.o 00:02:15.301 CXX 
test/cpp_headers/nvme_zns.o 00:02:15.301 CXX test/cpp_headers/nvmf_cmd.o 00:02:15.301 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:15.301 CXX test/cpp_headers/nvmf.o 00:02:15.301 CXX test/cpp_headers/nvmf_spec.o 00:02:15.302 CXX test/cpp_headers/nvmf_transport.o 00:02:15.302 CXX test/cpp_headers/opal.o 00:02:15.302 CXX test/cpp_headers/opal_spec.o 00:02:15.302 CXX test/cpp_headers/pci_ids.o 00:02:15.302 CXX test/cpp_headers/pipe.o 00:02:15.302 CXX test/cpp_headers/queue.o 00:02:15.302 CXX test/cpp_headers/reduce.o 00:02:15.302 CXX test/cpp_headers/rpc.o 00:02:15.302 CXX test/cpp_headers/scheduler.o 00:02:15.302 CXX test/cpp_headers/scsi.o 00:02:15.302 CXX test/cpp_headers/scsi_spec.o 00:02:15.302 CXX test/cpp_headers/sock.o 00:02:15.302 CXX test/cpp_headers/stdinc.o 00:02:15.302 LINK nvme_compliance 00:02:15.302 CXX test/cpp_headers/string.o 00:02:15.302 CXX test/cpp_headers/thread.o 00:02:15.565 CXX test/cpp_headers/trace.o 00:02:15.565 LINK fdp 00:02:15.565 CXX test/cpp_headers/trace_parser.o 00:02:15.565 CXX test/cpp_headers/tree.o 00:02:15.565 CXX test/cpp_headers/ublk.o 00:02:15.565 CXX test/cpp_headers/util.o 00:02:15.565 CXX test/cpp_headers/uuid.o 00:02:15.565 CXX test/cpp_headers/version.o 00:02:15.565 CXX test/cpp_headers/vfio_user_pci.o 00:02:15.565 CXX test/cpp_headers/vfio_user_spec.o 00:02:15.565 CXX test/cpp_headers/vhost.o 00:02:15.565 CXX test/cpp_headers/vmd.o 00:02:15.565 CXX test/cpp_headers/xor.o 00:02:15.565 CXX test/cpp_headers/zipf.o 00:02:16.132 LINK memory_ut 00:02:16.390 LINK cuse 00:02:16.649 LINK iscsi_fuzz 00:02:19.184 LINK esnap 00:02:19.184 00:02:19.184 real 0m45.078s 00:02:19.184 user 9m37.845s 00:02:19.184 sys 2m8.056s 00:02:19.184 01:10:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:19.184 01:10:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:19.184 ************************************ 00:02:19.184 END TEST make 00:02:19.184 ************************************ 00:02:19.184 01:10:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:19.184 01:10:10 -- nvmf/common.sh@7 -- # uname -s 00:02:19.184 01:10:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:19.184 01:10:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:19.184 01:10:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:19.184 01:10:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:19.184 01:10:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:19.184 01:10:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:19.184 01:10:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:19.184 01:10:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:19.184 01:10:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:19.184 01:10:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:19.184 01:10:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:19.184 01:10:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:19.184 01:10:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:19.184 01:10:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:19.184 01:10:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:19.184 01:10:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:19.443 01:10:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:19.443 01:10:10 -- 
scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:19.443 01:10:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:19.443 01:10:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.443 01:10:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.443 01:10:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.443 01:10:10 -- paths/export.sh@5 -- # export PATH 00:02:19.443 01:10:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.443 01:10:10 -- nvmf/common.sh@46 -- # : 0 00:02:19.443 01:10:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:19.443 01:10:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:19.443 01:10:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:19.443 01:10:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:19.443 01:10:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:19.443 01:10:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:19.443 01:10:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:19.443 01:10:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:19.443 01:10:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:19.443 01:10:10 -- spdk/autotest.sh@32 -- # uname -s 00:02:19.443 01:10:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:19.443 01:10:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:19.443 01:10:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:19.443 01:10:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:19.443 01:10:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:19.443 01:10:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:19.443 01:10:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:19.443 01:10:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:19.443 01:10:10 -- spdk/autotest.sh@48 -- # udevadm_pid=480904 00:02:19.443 01:10:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:19.443 01:10:10 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:19.443 01:10:10 -- spdk/autotest.sh@54 -- # echo 480906 00:02:19.443 01:10:10 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:19.443 01:10:10 -- spdk/autotest.sh@56 -- # echo 480907 00:02:19.443 01:10:10 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:19.443 01:10:10 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:19.443 01:10:10 -- spdk/autotest.sh@60 -- # echo 480908 00:02:19.443 01:10:10 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:19.443 01:10:10 -- spdk/autotest.sh@62 -- # echo 480909 00:02:19.443 01:10:10 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:19.443 01:10:10 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:19.443 01:10:10 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:19.443 01:10:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:19.443 01:10:10 -- common/autotest_common.sh@10 -- # set +x 00:02:19.443 01:10:10 -- spdk/autotest.sh@70 -- # create_test_list 00:02:19.443 01:10:10 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:19.443 01:10:10 -- common/autotest_common.sh@10 -- # set +x 00:02:19.443 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:19.443 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:19.443 01:10:10 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:19.443 01:10:10 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.443 01:10:10 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.443 01:10:10 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:19.443 01:10:10 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.443 01:10:10 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:19.443 01:10:10 -- common/autotest_common.sh@1440 -- # uname 00:02:19.443 01:10:10 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:19.443 01:10:10 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:19.443 01:10:10 -- common/autotest_common.sh@1460 -- # uname 00:02:19.444 01:10:10 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:19.444 01:10:10 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:19.444 01:10:11 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:19.444 01:10:11 -- spdk/autotest.sh@83 -- # hash lcov 00:02:19.444 01:10:11 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:19.444 01:10:11 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:19.444 --rc lcov_branch_coverage=1 00:02:19.444 --rc lcov_function_coverage=1 00:02:19.444 --rc genhtml_branch_coverage=1 00:02:19.444 --rc genhtml_function_coverage=1 00:02:19.444 --rc genhtml_legend=1 00:02:19.444 --rc geninfo_all_blocks=1 00:02:19.444 ' 00:02:19.444 01:10:11 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:19.444 --rc lcov_branch_coverage=1 00:02:19.444 --rc lcov_function_coverage=1 00:02:19.444 --rc genhtml_branch_coverage=1 00:02:19.444 
--rc genhtml_function_coverage=1 00:02:19.444 --rc genhtml_legend=1 00:02:19.444 --rc geninfo_all_blocks=1 00:02:19.444 ' 00:02:19.444 01:10:11 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:19.444 --rc lcov_branch_coverage=1 00:02:19.444 --rc lcov_function_coverage=1 00:02:19.444 --rc genhtml_branch_coverage=1 00:02:19.444 --rc genhtml_function_coverage=1 00:02:19.444 --rc genhtml_legend=1 00:02:19.444 --rc geninfo_all_blocks=1 00:02:19.444 --no-external' 00:02:19.444 01:10:11 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:19.444 --rc lcov_branch_coverage=1 00:02:19.444 --rc lcov_function_coverage=1 00:02:19.444 --rc genhtml_branch_coverage=1 00:02:19.444 --rc genhtml_function_coverage=1 00:02:19.444 --rc genhtml_legend=1 00:02:19.444 --rc geninfo_all_blocks=1 00:02:19.444 --no-external' 00:02:19.444 01:10:11 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:19.444 lcov: LCOV version 1.14 00:02:19.444 01:10:11 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:22.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:22.725 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:22.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:22.725 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:22.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:22.725 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:49.262 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:49.262 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:49.263 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:49.263 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 
00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:49.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:51.161 01:10:42 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:51.161 01:10:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:51.161 01:10:42 -- common/autotest_common.sh@10 -- # set +x 00:02:51.161 01:10:42 -- spdk/autotest.sh@102 -- # rm -f 00:02:51.161 01:10:42 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.093 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:52.093 
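The long run of geninfo "no functions found" warnings above is expected rather than a failure: the .gcno files under test/cpp_headers come from translation units that each just include one public header, so there are no instrumented functions for geninfo to report. A minimal stand-alone reproduction of the same condition (illustrative file names, not the SPDK test itself):

    printf '#include <stdint.h>\n' > header_only.c     # a TU that defines no functions of its own
    gcc --coverage -c header_only.c -o header_only.o    # still emits a header_only.gcno notes file
    gcov header_only.c                                  # finds nothing to cover, the condition geninfo warns about

geninfo continues past these warnings, as the rest of the log shows, so they only mean those files contribute nothing to the coverage report.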
0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:52.093 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:52.093 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:52.093 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:52.093 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:52.093 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:52.093 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:52.093 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:52.093 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:52.387 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:52.387 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:52.387 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:52.387 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:52.387 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:52.387 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:52.387 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:52.387 01:10:44 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:52.387 01:10:44 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:52.387 01:10:44 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:52.387 01:10:44 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:52.387 01:10:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:52.387 01:10:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:52.387 01:10:44 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:52.387 01:10:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:52.387 01:10:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:52.387 01:10:44 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:52.387 01:10:44 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:02:52.387 01:10:44 -- spdk/autotest.sh@121 -- # grep -v p 00:02:52.387 01:10:44 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:52.387 01:10:44 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:52.387 01:10:44 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:52.387 01:10:44 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:52.387 01:10:44 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:52.387 No valid GPT data, bailing 00:02:52.387 01:10:44 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:52.387 01:10:44 -- scripts/common.sh@393 -- # pt= 00:02:52.387 01:10:44 -- scripts/common.sh@394 -- # return 1 00:02:52.387 01:10:44 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:52.387 1+0 records in 00:02:52.387 1+0 records out 00:02:52.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00263303 s, 398 MB/s 00:02:52.387 01:10:44 -- spdk/autotest.sh@129 -- # sync 00:02:52.387 01:10:44 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:52.387 01:10:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:52.387 01:10:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:54.288 01:10:45 -- spdk/autotest.sh@135 -- # uname -s 00:02:54.288 01:10:45 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:02:54.288 01:10:45 -- spdk/autotest.sh@136 -- # run_test setup.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:54.288 01:10:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:54.288 01:10:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:54.288 01:10:45 -- common/autotest_common.sh@10 -- # set +x 00:02:54.288 ************************************ 00:02:54.288 START TEST setup.sh 00:02:54.288 ************************************ 00:02:54.288 01:10:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:54.288 * Looking for test storage... 00:02:54.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:54.288 01:10:45 -- setup/test-setup.sh@10 -- # uname -s 00:02:54.288 01:10:45 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:54.288 01:10:45 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:54.288 01:10:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:54.288 01:10:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:54.288 01:10:45 -- common/autotest_common.sh@10 -- # set +x 00:02:54.288 ************************************ 00:02:54.288 START TEST acl 00:02:54.288 ************************************ 00:02:54.288 01:10:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:54.288 * Looking for test storage... 00:02:54.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:54.288 01:10:45 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:54.288 01:10:45 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:54.288 01:10:45 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:54.288 01:10:45 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:54.288 01:10:45 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:54.288 01:10:45 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:54.288 01:10:45 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:54.288 01:10:45 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:54.288 01:10:45 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:54.288 01:10:45 -- setup/acl.sh@12 -- # devs=() 00:02:54.288 01:10:45 -- setup/acl.sh@12 -- # declare -a devs 00:02:54.288 01:10:45 -- setup/acl.sh@13 -- # drivers=() 00:02:54.288 01:10:45 -- setup/acl.sh@13 -- # declare -A drivers 00:02:54.288 01:10:45 -- setup/acl.sh@51 -- # setup reset 00:02:54.288 01:10:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:54.288 01:10:45 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:55.664 01:10:47 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:55.664 01:10:47 -- setup/acl.sh@16 -- # local dev driver 00:02:55.664 01:10:47 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.664 01:10:47 -- setup/acl.sh@15 -- # setup output status 00:02:55.664 01:10:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.664 01:10:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:57.040 Hugepages 00:02:57.040 node hugesize free / total 00:02:57.040 01:10:48 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:57.040 01:10:48 -- setup/acl.sh@19 -- # continue 00:02:57.040 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.040 
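From this point acl.sh collects devices by parsing the "setup.sh status" table: each row is split with read -r, only fields shaped like a PCI BDF are kept, and ioatdma channels are skipped so that just the NVMe controller at 0000:88:00.0 survives. A condensed sketch of that loop, not the verbatim script (the invocation path is the one used on this rig; the real loop also handles zoned block devices):

    declare -a devs=()
    declare -A drivers=()
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue     # skip the hugepage/header rows of the table
        [[ $driver == nvme ]] || continue     # ioatdma entries fall through, as the trace below shows
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status)
    echo "collected ${#devs[@]} nvme controller(s): ${devs[*]}"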
01:10:48 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:57.040 01:10:48 -- setup/acl.sh@19 -- # continue 00:02:57.040 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.040 01:10:48 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:57.040 01:10:48 -- setup/acl.sh@19 -- # continue 00:02:57.040 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.040 00:02:57.040 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:57.040 01:10:48 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:57.040 01:10:48 -- setup/acl.sh@19 -- # continue 00:02:57.040 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.040 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:57.040 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.040 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.040 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.040 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:57.040 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.040 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.040 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.040 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:57.040 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.040 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.040 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.040 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:57.040 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.040 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.040 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.040 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:57.040 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.040 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.040 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.040 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.041 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.041 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.041 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.041 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.041 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.041 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.041 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.041 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.041 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.041 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.041 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.041 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.041 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.041 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.041 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.041 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.041 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.041 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.041 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.041 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # continue 00:02:57.041 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.041 01:10:48 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:57.041 01:10:48 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:57.041 01:10:48 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:57.041 01:10:48 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:57.041 01:10:48 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:57.041 01:10:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.041 01:10:48 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:57.041 01:10:48 -- setup/acl.sh@54 -- # run_test denied denied 00:02:57.041 01:10:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:57.041 01:10:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:57.041 01:10:48 -- common/autotest_common.sh@10 -- # set +x 00:02:57.041 ************************************ 00:02:57.041 START TEST denied 00:02:57.041 ************************************ 00:02:57.041 01:10:48 -- common/autotest_common.sh@1104 -- # denied 00:02:57.041 01:10:48 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:57.041 01:10:48 -- setup/acl.sh@38 -- # setup output config 00:02:57.041 01:10:48 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:57.041 01:10:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.041 01:10:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:58.418 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:58.418 01:10:50 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:58.418 01:10:50 -- setup/acl.sh@28 -- # local dev driver 00:02:58.418 01:10:50 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:58.418 01:10:50 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:58.418 01:10:50 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:58.418 01:10:50 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:58.418 01:10:50 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:58.418 01:10:50 -- 
setup/acl.sh@41 -- # setup reset 00:02:58.418 01:10:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:58.418 01:10:50 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.954 00:03:00.954 real 0m3.893s 00:03:00.954 user 0m1.174s 00:03:00.954 sys 0m1.811s 00:03:00.954 01:10:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:00.954 01:10:52 -- common/autotest_common.sh@10 -- # set +x 00:03:00.954 ************************************ 00:03:00.954 END TEST denied 00:03:00.954 ************************************ 00:03:00.954 01:10:52 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:00.954 01:10:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:00.954 01:10:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:00.954 01:10:52 -- common/autotest_common.sh@10 -- # set +x 00:03:00.954 ************************************ 00:03:00.954 START TEST allowed 00:03:00.954 ************************************ 00:03:00.954 01:10:52 -- common/autotest_common.sh@1104 -- # allowed 00:03:00.954 01:10:52 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:00.954 01:10:52 -- setup/acl.sh@45 -- # setup output config 00:03:00.954 01:10:52 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:00.954 01:10:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.954 01:10:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:03.486 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:03.486 01:10:54 -- setup/acl.sh@47 -- # verify 00:03:03.486 01:10:54 -- setup/acl.sh@28 -- # local dev driver 00:03:03.486 01:10:54 -- setup/acl.sh@48 -- # setup reset 00:03:03.486 01:10:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:03.486 01:10:54 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.864 00:03:04.864 real 0m3.824s 00:03:04.864 user 0m1.011s 00:03:04.864 sys 0m1.670s 00:03:04.864 01:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.864 01:10:56 -- common/autotest_common.sh@10 -- # set +x 00:03:04.864 ************************************ 00:03:04.864 END TEST allowed 00:03:04.864 ************************************ 00:03:04.864 00:03:04.864 real 0m10.445s 00:03:04.864 user 0m3.217s 00:03:04.864 sys 0m5.259s 00:03:04.864 01:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.864 01:10:56 -- common/autotest_common.sh@10 -- # set +x 00:03:04.864 ************************************ 00:03:04.864 END TEST acl 00:03:04.864 ************************************ 00:03:04.864 01:10:56 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:04.864 01:10:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:04.864 01:10:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:04.864 01:10:56 -- common/autotest_common.sh@10 -- # set +x 00:03:04.864 ************************************ 00:03:04.864 START TEST hugepages 00:03:04.864 ************************************ 00:03:04.864 01:10:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:04.864 * Looking for test storage... 
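The denied/allowed pair that just finished is the PCI filtering check: with the controller's BDF in PCI_BLOCKED, "setup.sh config" logs "Skipping denied controller"; with the same BDF in PCI_ALLOWED it is rebound (nvme -> vfio-pci above), and each case ends with a reset. A hedged sketch of that pattern using this rig's controller (variable names are mine; the grep patterns are the ones the test checks for, lightly simplified):

    bdf=0000:88:00.0
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # denied: a blocked controller must be skipped by "config"
    PCI_BLOCKED=" $bdf" "$spdk/scripts/setup.sh" config | grep "Skipping denied controller at $bdf"
    "$spdk/scripts/setup.sh" reset
    # allowed: with only this controller allowed, "config" moves it off the nvme driver
    PCI_ALLOWED="$bdf" "$spdk/scripts/setup.sh" config | grep -E "$bdf .*: nvme -> .*"
    "$spdk/scripts/setup.sh" reset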
00:03:04.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:04.864 01:10:56 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:04.864 01:10:56 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:04.864 01:10:56 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:04.864 01:10:56 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:04.864 01:10:56 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:04.864 01:10:56 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:04.864 01:10:56 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:04.864 01:10:56 -- setup/common.sh@18 -- # local node= 00:03:04.864 01:10:56 -- setup/common.sh@19 -- # local var val 00:03:04.864 01:10:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.864 01:10:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.864 01:10:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.864 01:10:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.864 01:10:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.864 01:10:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42718380 kB' 'MemAvailable: 47248460 kB' 'Buffers: 2704 kB' 'Cached: 11213412 kB' 'SwapCached: 0 kB' 'Active: 7205964 kB' 'Inactive: 4516068 kB' 'Active(anon): 6810340 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509264 kB' 'Mapped: 208516 kB' 'Shmem: 6304424 kB' 'KReclaimable: 224456 kB' 'Slab: 597536 kB' 'SReclaimable: 224456 kB' 'SUnreclaim: 373080 kB' 'KernelStack: 12752 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 7890692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- 
# [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.864 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.864 01:10:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # continue 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.865 01:10:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.865 01:10:56 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.865 01:10:56 -- setup/common.sh@33 -- # echo 2048 00:03:04.865 01:10:56 -- setup/common.sh@33 -- # return 0 00:03:04.865 01:10:56 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:04.865 01:10:56 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:04.865 01:10:56 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:04.865 01:10:56 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:04.865 01:10:56 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:04.865 01:10:56 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
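The wall of xtrace above is setup/common.sh's get_meminfo scanning /proc/meminfo field by field until it reaches Hugepagesize; the value it echoes (2048, i.e. 2 MiB pages) becomes default_hugepages. A condensed sketch of that helper (the real one also takes a NUMA node and strips the "Node N" prefix when reading per-node meminfo, which is omitted here):

    get_meminfo() {
        # print the value column of one /proc/meminfo field, e.g. 2048 for Hugepagesize (kB units)
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    default_hugepages=$(get_meminfo Hugepagesize)    # 2048 on this builder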
00:03:04.865 01:10:56 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:04.865 01:10:56 -- setup/hugepages.sh@207 -- # get_nodes 00:03:04.865 01:10:56 -- setup/hugepages.sh@27 -- # local node 00:03:04.865 01:10:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.865 01:10:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:04.865 01:10:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.865 01:10:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:04.865 01:10:56 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.865 01:10:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.865 01:10:56 -- setup/hugepages.sh@208 -- # clear_hp 00:03:04.865 01:10:56 -- setup/hugepages.sh@37 -- # local node hp 00:03:04.865 01:10:56 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.865 01:10:56 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.865 01:10:56 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.865 01:10:56 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.865 01:10:56 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.865 01:10:56 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.865 01:10:56 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.865 01:10:56 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.865 01:10:56 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.865 01:10:56 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.865 01:10:56 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:04.865 01:10:56 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:04.865 01:10:56 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:04.865 01:10:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:04.865 01:10:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:04.865 01:10:56 -- common/autotest_common.sh@10 -- # set +x 00:03:04.865 ************************************ 00:03:04.865 START TEST default_setup 00:03:04.865 ************************************ 00:03:04.865 01:10:56 -- common/autotest_common.sh@1104 -- # default_setup 00:03:04.865 01:10:56 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:04.865 01:10:56 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:04.865 01:10:56 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:04.865 01:10:56 -- setup/hugepages.sh@51 -- # shift 00:03:04.866 01:10:56 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:04.866 01:10:56 -- setup/hugepages.sh@52 -- # local node_ids 00:03:04.866 01:10:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.866 01:10:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:04.866 01:10:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:04.866 01:10:56 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:04.866 01:10:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.866 01:10:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:04.866 01:10:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.866 01:10:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.866 01:10:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.866 01:10:56 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:04.866 01:10:56 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:04.866 01:10:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:04.866 01:10:56 -- setup/hugepages.sh@73 -- # return 0 00:03:04.866 01:10:56 -- setup/hugepages.sh@137 -- # setup output 00:03:04.866 01:10:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.866 01:10:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:06.242 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:06.242 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:06.242 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:06.242 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:06.242 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:06.242 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:06.242 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:06.242 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:06.242 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:06.242 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:06.242 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:06.242 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:06.242 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:06.242 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:06.242 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:06.242 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:07.184 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:07.184 01:10:58 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:07.184 01:10:58 -- setup/hugepages.sh@89 -- # local node 00:03:07.184 01:10:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:07.184 01:10:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:07.184 01:10:58 -- setup/hugepages.sh@92 -- # local surp 00:03:07.184 01:10:58 -- setup/hugepages.sh@93 -- # local resv 00:03:07.184 01:10:58 -- setup/hugepages.sh@94 -- # local anon 00:03:07.184 01:10:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:07.184 01:10:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:07.184 01:10:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:07.184 01:10:58 -- setup/common.sh@18 -- # local node= 00:03:07.184 01:10:58 -- setup/common.sh@19 -- # local var val 00:03:07.184 01:10:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.184 01:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.184 01:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.184 01:10:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.184 01:10:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.184 01:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44812696 kB' 'MemAvailable: 49342760 kB' 'Buffers: 2704 kB' 'Cached: 11213500 kB' 'SwapCached: 0 kB' 'Active: 7223792 kB' 'Inactive: 4516068 kB' 'Active(anon): 6828168 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527020 kB' 'Mapped: 208636 kB' 'Shmem: 6304512 kB' 'KReclaimable: 224424 kB' 'Slab: 596856 kB' 'SReclaimable: 224424 kB' 'SUnreclaim: 372432 kB' 'KernelStack: 12768 kB' 'PageTables: 8492 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7911308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 
-- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.184 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.184 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.185 01:10:58 -- setup/common.sh@33 -- # echo 0 00:03:07.185 01:10:58 -- setup/common.sh@33 -- # return 0 00:03:07.185 01:10:58 -- setup/hugepages.sh@97 -- # anon=0 00:03:07.185 01:10:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:07.185 01:10:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.185 01:10:58 -- setup/common.sh@18 -- # local node= 00:03:07.185 01:10:58 -- setup/common.sh@19 -- # local var val 00:03:07.185 01:10:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.185 01:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.185 01:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.185 01:10:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.185 01:10:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.185 01:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44814004 kB' 'MemAvailable: 49344068 kB' 'Buffers: 2704 kB' 'Cached: 11213500 kB' 'SwapCached: 0 kB' 'Active: 7224020 kB' 'Inactive: 4516068 kB' 'Active(anon): 6828396 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527252 kB' 'Mapped: 208728 kB' 'Shmem: 6304512 kB' 'KReclaimable: 224424 kB' 'Slab: 597160 kB' 'SReclaimable: 224424 kB' 'SUnreclaim: 372736 kB' 'KernelStack: 12784 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7911320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 
'DirectMap1G: 52428800 kB' 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 
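
The long run of "[[ ... ]] / continue" entries above and below is setup/common.sh's get_meminfo helper scanning the meminfo dump one key at a time until it hits the requested field. A condensed reconstruction from the xtrace, assuming the usual while/read form (the real helper may differ in detail):

shopt -s extglob

# Condensed sketch of setup/common.sh's get_meminfo, pieced together from the
# xtrace above; not the verbatim script.
get_meminfo() {
	local get=$1 node=${2:-}
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# With a node argument, prefer the per-node meminfo file if it exists.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node N "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")

	# This loop is what produces the repeated '[[ key == ... ]] / continue'
	# trace entries: every key other than the requested one is skipped.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val" # value in kB, or a bare count for the HugePages_* fields
		return 0
	done < <(printf '%s\n' "${mem[@]}")
}

Called as get_meminfo HugePages_Surp for the whole system, or get_meminfo HugePages_Surp 0 for node 0, which are exactly the lookups this part of the log walks through.
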
00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.185 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.185 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 
-- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
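
One quick sanity check on the meminfo dumps printed above: the hugepage lines are internally consistent, since HugePages_Total times Hugepagesize should equal Hugetlb here (only the 2 MB pool is populated). Values copied from the dump:

# HugePages_Total (1024) times Hugepagesize (2048 kB) should match Hugetlb.
echo $((1024 * 2048))   # prints 2097152, i.e. the 'Hugetlb: 2097152 kB' line above
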
00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.186 01:10:58 -- setup/common.sh@33 -- # echo 0 00:03:07.186 01:10:58 -- setup/common.sh@33 -- # return 0 00:03:07.186 01:10:58 -- setup/hugepages.sh@99 -- # surp=0 00:03:07.186 01:10:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:07.186 01:10:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:07.186 01:10:58 -- setup/common.sh@18 -- # local node= 00:03:07.186 01:10:58 -- setup/common.sh@19 -- # local var val 00:03:07.186 01:10:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.186 01:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.186 01:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.186 01:10:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.186 01:10:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.186 01:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44814960 kB' 'MemAvailable: 49345024 kB' 'Buffers: 2704 kB' 'Cached: 11213512 kB' 'SwapCached: 0 kB' 'Active: 7222864 kB' 'Inactive: 4516068 kB' 'Active(anon): 6827240 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525976 kB' 'Mapped: 208588 kB' 'Shmem: 6304524 kB' 'KReclaimable: 224424 kB' 'Slab: 597132 kB' 'SReclaimable: 224424 kB' 'SUnreclaim: 372708 kB' 'KernelStack: 12752 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7911336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.186 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.186 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 
00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.187 01:10:58 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.187 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.187 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.188 01:10:58 -- setup/common.sh@33 -- # echo 0 00:03:07.188 01:10:58 -- setup/common.sh@33 -- # return 0 00:03:07.188 01:10:58 -- setup/hugepages.sh@100 -- # resv=0 00:03:07.188 01:10:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:07.188 nr_hugepages=1024 00:03:07.188 01:10:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:07.188 resv_hugepages=0 00:03:07.188 01:10:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:07.188 surplus_hugepages=0 00:03:07.188 01:10:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:07.188 anon_hugepages=0 00:03:07.188 01:10:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.188 01:10:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:07.188 01:10:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:07.188 01:10:58 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:07.188 01:10:58 -- setup/common.sh@18 -- # local node= 00:03:07.188 01:10:58 -- setup/common.sh@19 -- # local var val 00:03:07.188 01:10:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.188 01:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.188 01:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.188 01:10:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.188 01:10:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.188 01:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44815108 kB' 'MemAvailable: 49345172 kB' 'Buffers: 2704 kB' 'Cached: 11213528 kB' 'SwapCached: 0 kB' 'Active: 7222960 kB' 'Inactive: 4516068 kB' 'Active(anon): 6827336 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526056 kB' 'Mapped: 208588 kB' 'Shmem: 6304540 kB' 'KReclaimable: 224424 kB' 'Slab: 597132 kB' 'SReclaimable: 224424 kB' 'SUnreclaim: 372708 kB' 'KernelStack: 12736 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7911352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.188 
01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.188 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.188 01:10:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 
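
Once AnonHugePages, HugePages_Surp and HugePages_Rsvd have each resolved to 0 and HugePages_Total resolves to 1024, setup/hugepages.sh prints nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages and asserts that the pool adds up. A rough sketch of that accounting step, using the variable names visible in the trace; verify_hugepages is a hypothetical wrapper name, and get_meminfo is the helper reconstructed earlier in this log:

# Hypothetical condensed view of the check around setup/hugepages.sh@97-@110.
verify_hugepages() {
	local nr_hugepages=1024   # requested pool size for this test run

	local anon surp resv total
	anon=$(get_meminfo AnonHugePages)     # 0 in this run
	surp=$(get_meminfo HugePages_Surp)    # 0
	resv=$(get_meminfo HugePages_Rsvd)    # 0
	total=$(get_meminfo HugePages_Total)  # 1024

	echo "nr_hugepages=$nr_hugepages"
	echo "resv_hugepages=$resv"
	echo "surplus_hugepages=$surp"
	echo "anon_hugepages=$anon"

	# The arithmetic test seen at hugepages.sh@110: the kernel must report
	# exactly the requested pool, i.e. total == nr_hugepages + surplus + reserved.
	((total == nr_hugepages + surp + resv))
}
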
00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.189 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.189 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 
01:10:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.190 01:10:58 -- setup/common.sh@33 -- # echo 1024 00:03:07.190 01:10:58 -- setup/common.sh@33 -- # return 0 00:03:07.190 01:10:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.190 01:10:58 -- setup/hugepages.sh@112 -- # get_nodes 00:03:07.190 01:10:58 -- setup/hugepages.sh@27 -- # local node 00:03:07.190 01:10:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.190 01:10:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:07.190 01:10:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.190 01:10:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:07.190 01:10:58 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:07.190 01:10:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:07.190 01:10:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:07.190 01:10:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:07.190 01:10:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:07.190 01:10:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.190 01:10:58 -- setup/common.sh@18 -- # local node=0 00:03:07.190 01:10:58 -- setup/common.sh@19 -- # local var val 00:03:07.190 01:10:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.190 01:10:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.190 01:10:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:07.190 01:10:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:07.190 01:10:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.190 01:10:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21431816 kB' 'MemUsed: 11445124 kB' 'SwapCached: 0 kB' 'Active: 
4766652 kB' 'Inactive: 3428384 kB' 'Active(anon): 4494720 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3428384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8081664 kB' 'Mapped: 85668 kB' 'AnonPages: 116532 kB' 'Shmem: 4381348 kB' 'KernelStack: 6936 kB' 'PageTables: 3248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91252 kB' 'Slab: 305500 kB' 'SReclaimable: 91252 kB' 'SUnreclaim: 214248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.190 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.190 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 
00:03:07.191 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # continue 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.191 01:10:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.191 01:10:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.191 01:10:58 -- setup/common.sh@33 -- # echo 0 00:03:07.191 01:10:58 -- setup/common.sh@33 -- # return 0 00:03:07.191 01:10:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:07.191 01:10:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:07.191 01:10:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:07.191 01:10:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:07.191 01:10:58 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:07.191 node0=1024 expecting 1024 00:03:07.191 01:10:58 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:07.191 00:03:07.191 real 0m2.478s 00:03:07.191 user 0m0.655s 00:03:07.191 sys 0m0.962s 00:03:07.191 01:10:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:07.191 01:10:58 -- common/autotest_common.sh@10 -- # set +x 00:03:07.191 ************************************ 00:03:07.191 END TEST default_setup 00:03:07.191 ************************************ 00:03:07.449 01:10:58 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:07.449 01:10:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:07.449 01:10:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:07.449 01:10:58 -- common/autotest_common.sh@10 -- # set +x 00:03:07.449 ************************************ 00:03:07.449 START TEST per_node_1G_alloc 00:03:07.449 ************************************ 00:03:07.449 01:10:58 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:07.449 01:10:58 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:07.449 01:10:58 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:07.449 01:10:58 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:07.449 01:10:58 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:07.449 01:10:58 -- setup/hugepages.sh@51 -- # shift 00:03:07.449 01:10:58 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:07.449 01:10:58 -- setup/hugepages.sh@52 -- # local node_ids 00:03:07.449 01:10:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:07.449 01:10:58 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:07.449 01:10:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:07.449 01:10:58 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:07.449 01:10:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:07.449 01:10:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:07.449 01:10:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:07.449 01:10:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:07.449 01:10:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:07.449 01:10:58 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:07.449 01:10:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:07.449 01:10:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:07.449 01:10:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:07.449 01:10:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:07.449 01:10:58 -- setup/hugepages.sh@73 -- # return 0 00:03:07.449 01:10:58 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:07.449 
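For readers tracing the xtrace spam around this point: the long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s... ]] / continue" entries are setup/common.sh's get_meminfo walking a meminfo file one key at a time until it reaches the requested field. Below is a minimal bash sketch of that lookup pattern as it appears in the trace; it is an illustration, not the SPDK setup/common.sh source, and the helper name get_meminfo_sketch is invented for this note.

    # Sketch only: resolve one meminfo key, system-wide or for a NUMA node.
    # Assumes the standard /proc/meminfo and
    # /sys/devices/system/node/node<N>/meminfo layouts.
    get_meminfo_sketch() {
        local key=$1 node=$2
        local file=/proc/meminfo line var val _
        [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            # Per-node files prefix every row with "Node <N> "; strip it so
            # the key compares cleanly (the traced script does this with
            # mapfile plus a pattern substitution).
            [[ -n $node ]] && line=${line#Node $node }
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$key" ]]; then
                echo "$val"        # numeric value only; units are dropped
                return 0
            fi
        done < "$file"
        return 1
    }

In the trace above, this kind of lookup returns 1024 for HugePages_Total and 0 for HugePages_Surp on node0, so the (( 1024 == nr_hugepages + surp + resv )) check passes, the test prints "node0=1024 expecting 1024", and default_setup ends before per_node_1G_alloc starts with NRHUGE=512 spread across HUGENODE=0,1.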
01:10:58 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:07.449 01:10:58 -- setup/hugepages.sh@146 -- # setup output 00:03:07.450 01:10:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.450 01:10:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:08.394 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:08.394 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:08.394 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:08.394 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:08.652 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:08.652 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:08.652 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:08.652 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:08.652 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:08.652 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:08.652 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:08.652 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:08.652 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:08.652 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:08.652 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:08.652 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:08.652 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:08.652 01:11:00 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:08.652 01:11:00 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:08.652 01:11:00 -- setup/hugepages.sh@89 -- # local node 00:03:08.652 01:11:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:08.652 01:11:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:08.652 01:11:00 -- setup/hugepages.sh@92 -- # local surp 00:03:08.652 01:11:00 -- setup/hugepages.sh@93 -- # local resv 00:03:08.652 01:11:00 -- setup/hugepages.sh@94 -- # local anon 00:03:08.652 01:11:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:08.652 01:11:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:08.652 01:11:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:08.652 01:11:00 -- setup/common.sh@18 -- # local node= 00:03:08.652 01:11:00 -- setup/common.sh@19 -- # local var val 00:03:08.652 01:11:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.652 01:11:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.652 01:11:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.652 01:11:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.652 01:11:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.652 01:11:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.652 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.652 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.652 01:11:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44803536 kB' 'MemAvailable: 49333600 kB' 'Buffers: 2704 kB' 'Cached: 11213584 kB' 'SwapCached: 0 kB' 'Active: 7223344 kB' 'Inactive: 4516068 kB' 'Active(anon): 6827720 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525920 kB' 'Mapped: 209096 kB' 
'Shmem: 6304596 kB' 'KReclaimable: 224424 kB' 'Slab: 597408 kB' 'SReclaimable: 224424 kB' 'SUnreclaim: 372984 kB' 'KernelStack: 12720 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7911152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:08.652 01:11:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.652 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.652 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.652 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.652 01:11:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': 
' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- 
setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.653 01:11:00 -- setup/common.sh@33 -- # echo 0 00:03:08.653 01:11:00 -- setup/common.sh@33 -- # return 0 00:03:08.653 01:11:00 -- setup/hugepages.sh@97 -- # anon=0 00:03:08.653 01:11:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:08.653 01:11:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.653 01:11:00 -- setup/common.sh@18 -- # local node= 00:03:08.653 01:11:00 -- setup/common.sh@19 -- # local var val 00:03:08.653 01:11:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.653 01:11:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.653 01:11:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.653 01:11:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.653 01:11:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.653 01:11:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44806476 kB' 'MemAvailable: 49336540 kB' 'Buffers: 2704 kB' 'Cached: 11213584 kB' 'SwapCached: 0 kB' 'Active: 7223604 kB' 'Inactive: 4516068 kB' 'Active(anon): 6827980 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526240 kB' 'Mapped: 209096 kB' 'Shmem: 6304596 kB' 'KReclaimable: 224424 kB' 'Slab: 597400 kB' 'SReclaimable: 224424 kB' 'SUnreclaim: 372976 kB' 'KernelStack: 12736 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7911532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.653 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.653 01:11:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 
01:11:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 
01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 
01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.654 01:11:00 -- setup/common.sh@33 -- # echo 0 00:03:08.654 01:11:00 -- setup/common.sh@33 -- # return 0 00:03:08.654 01:11:00 -- setup/hugepages.sh@99 -- # surp=0 00:03:08.654 01:11:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:08.654 01:11:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:08.654 01:11:00 -- setup/common.sh@18 -- # local node= 00:03:08.654 01:11:00 -- setup/common.sh@19 -- # local var val 00:03:08.654 01:11:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.654 01:11:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.654 01:11:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.654 01:11:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.654 01:11:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.654 01:11:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.654 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.654 01:11:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44814636 kB' 'MemAvailable: 49344700 kB' 'Buffers: 2704 kB' 'Cached: 11213584 kB' 'SwapCached: 0 kB' 'Active: 7223552 kB' 'Inactive: 4516068 kB' 'Active(anon): 6827928 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526564 kB' 'Mapped: 208672 kB' 'Shmem: 6304596 kB' 'KReclaimable: 224424 kB' 'Slab: 597400 kB' 'SReclaimable: 224424 kB' 'SUnreclaim: 372976 kB' 'KernelStack: 12720 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7911544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.654 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # 
continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 
01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.655 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.655 01:11:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.655 01:11:00 -- setup/common.sh@33 -- # echo 0 00:03:08.655 01:11:00 -- setup/common.sh@33 -- # return 0 00:03:08.655 01:11:00 -- setup/hugepages.sh@100 -- # resv=0 00:03:08.655 01:11:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:08.655 nr_hugepages=1024 00:03:08.655 01:11:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:08.655 resv_hugepages=0 00:03:08.655 01:11:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:08.655 surplus_hugepages=0 00:03:08.655 01:11:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:08.655 anon_hugepages=0 00:03:08.655 01:11:00 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.655 01:11:00 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:08.655 01:11:00 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:03:08.655 01:11:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:08.655 01:11:00 -- setup/common.sh@18 -- # local node= 00:03:08.655 01:11:00 -- setup/common.sh@19 -- # local var val 00:03:08.656 01:11:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.656 01:11:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.656 01:11:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.656 01:11:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.656 01:11:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.656 01:11:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.656 01:11:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44818064 kB' 'MemAvailable: 49348128 kB' 'Buffers: 2704 kB' 'Cached: 11213596 kB' 'SwapCached: 0 kB' 'Active: 7223108 kB' 'Inactive: 4516068 kB' 'Active(anon): 6827484 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526064 kB' 'Mapped: 208596 kB' 'Shmem: 6304608 kB' 'KReclaimable: 224424 kB' 'Slab: 597392 kB' 'SReclaimable: 224424 kB' 'SUnreclaim: 372968 kB' 'KernelStack: 12768 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7911560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.656 01:11:00 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.656 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.656 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.914 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.914 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.915 01:11:00 -- setup/common.sh@33 -- # echo 1024 00:03:08.915 01:11:00 -- setup/common.sh@33 -- # return 0 00:03:08.915 01:11:00 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.915 01:11:00 -- setup/hugepages.sh@112 -- # get_nodes 00:03:08.915 01:11:00 -- setup/hugepages.sh@27 -- # local node 00:03:08.915 01:11:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.915 01:11:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:08.915 01:11:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.915 01:11:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:08.915 01:11:00 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.915 01:11:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.915 01:11:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.915 01:11:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.915 01:11:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:08.915 01:11:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.915 01:11:00 -- setup/common.sh@18 -- # local node=0 00:03:08.915 01:11:00 -- setup/common.sh@19 -- # local var val 00:03:08.915 01:11:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.915 01:11:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.915 01:11:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.915 01:11:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.915 01:11:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.915 01:11:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32876940 kB' 'MemFree: 22500172 kB' 'MemUsed: 10376768 kB' 'SwapCached: 0 kB' 'Active: 4766592 kB' 'Inactive: 3428384 kB' 'Active(anon): 4494660 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3428384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8081716 kB' 'Mapped: 85668 kB' 'AnonPages: 116404 kB' 'Shmem: 4381400 kB' 'KernelStack: 6872 kB' 'PageTables: 3036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91252 kB' 'Slab: 305452 kB' 'SReclaimable: 91252 kB' 'SUnreclaim: 214200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 
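The long runs of IFS=': ' / read -r var val _ / continue entries in the trace above are bash xtrace output from the harness's get_meminfo helper walking a meminfo snapshot field by field until it reaches the requested key; the backslash-escaped names such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are simply how xtrace renders the literal comparison target, not corruption in the log. A minimal standalone re-creation of that pattern (an illustrative sketch only; the function body and variable names here are assumptions, not the exact setup/common.sh code) could look like:

get_meminfo() {
    # Usage: get_meminfo <field> [node]      e.g. get_meminfo HugePages_Surp 0
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        [[ -n $node ]] && line=${line#"Node $node "}   # per-node rows carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue               # the repeated skips seen in the trace
        echo "$val"                                    # numeric value; the kB unit lands in _
        return 0
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Surp 0    # would print 0 for the node-0 snapshot traced above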
00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.915 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.915 01:11:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- 
setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 
00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@33 -- # echo 0 00:03:08.916 01:11:00 -- setup/common.sh@33 -- # return 0 00:03:08.916 01:11:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.916 01:11:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.916 01:11:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.916 01:11:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:08.916 01:11:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.916 01:11:00 -- setup/common.sh@18 -- # local node=1 00:03:08.916 01:11:00 -- setup/common.sh@19 -- # local var val 00:03:08.916 01:11:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.916 01:11:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.916 01:11:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:08.916 01:11:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:08.916 01:11:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.916 01:11:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22316976 kB' 'MemUsed: 5347796 kB' 'SwapCached: 0 kB' 'Active: 2456476 kB' 'Inactive: 1087684 kB' 'Active(anon): 2332784 kB' 'Inactive(anon): 0 kB' 'Active(file): 123692 kB' 'Inactive(file): 1087684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3134616 kB' 'Mapped: 122928 kB' 'AnonPages: 409592 kB' 'Shmem: 1923240 kB' 'KernelStack: 5864 kB' 'PageTables: 5036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133172 kB' 'Slab: 291932 kB' 'SReclaimable: 133172 kB' 'SUnreclaim: 158760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 
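At this point node 0 has reported HugePages_Surp: 0, so the hugepages.sh accounting adds nothing to its expected count, and the same lookup is now repeated for node 1, whose snapshot likewise shows HugePages_Total: 512, HugePages_Free: 512 and HugePages_Surp: 0. With resv=0 and both surpluses at 0 the per-node expectations stay at 512 each, and 512 + 512 matches the global HugePages_Total of 1024, which is why the node0=512 / node1=512 checks further down pass. A quick standalone way to reproduce that per-node check outside the harness (an illustrative sketch; the harness itself does the equivalent through its nodes_test/nodes_sys arrays) would be:

expected=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # per-node meminfo rows look like "Node 0 HugePages_Total:   512"
    total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    echo "node$node=$total expecting $expected"
    [[ $total == "$expected" ]] || exit 1
done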
00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.916 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.916 01:11:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.916 01:11:00 -- 
setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # continue 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.917 01:11:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.917 01:11:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.917 01:11:00 -- setup/common.sh@33 -- # echo 0 00:03:08.917 01:11:00 -- setup/common.sh@33 -- # return 0 00:03:08.917 01:11:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.917 01:11:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.917 01:11:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.917 01:11:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.917 01:11:00 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:08.917 node0=512 expecting 512 00:03:08.917 01:11:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.917 01:11:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.917 01:11:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.917 01:11:00 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:08.917 node1=512 expecting 512 00:03:08.917 01:11:00 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:08.917 00:03:08.917 real 0m1.512s 00:03:08.917 user 0m0.615s 00:03:08.917 sys 0m0.862s 00:03:08.917 01:11:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.917 01:11:00 -- common/autotest_common.sh@10 -- # set +x 00:03:08.917 ************************************ 00:03:08.917 END TEST per_node_1G_alloc 00:03:08.917 ************************************ 00:03:08.917 01:11:00 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:08.917 
01:11:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.917 01:11:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.917 01:11:00 -- common/autotest_common.sh@10 -- # set +x 00:03:08.917 ************************************ 00:03:08.917 START TEST even_2G_alloc 00:03:08.917 ************************************ 00:03:08.917 01:11:00 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:08.917 01:11:00 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:08.917 01:11:00 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:08.917 01:11:00 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:08.917 01:11:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:08.917 01:11:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:08.917 01:11:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:08.917 01:11:00 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:08.917 01:11:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:08.917 01:11:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:08.917 01:11:00 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:08.917 01:11:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:08.917 01:11:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:08.917 01:11:00 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:08.917 01:11:00 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:08.917 01:11:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:08.917 01:11:00 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:08.917 01:11:00 -- setup/hugepages.sh@83 -- # : 512 00:03:08.917 01:11:00 -- setup/hugepages.sh@84 -- # : 1 00:03:08.917 01:11:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:08.917 01:11:00 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:08.917 01:11:00 -- setup/hugepages.sh@83 -- # : 0 00:03:08.917 01:11:00 -- setup/hugepages.sh@84 -- # : 0 00:03:08.917 01:11:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:08.917 01:11:00 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:08.917 01:11:00 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:08.917 01:11:00 -- setup/hugepages.sh@153 -- # setup output 00:03:08.917 01:11:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.917 01:11:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:09.850 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:09.850 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:09.850 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:09.850 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:09.850 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:09.850 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:09.850 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:09.850 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:09.850 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:09.850 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:09.850 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:09.850 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:09.850 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:09.850 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:09.850 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:09.850 0000:80:04.1 (8086 0e21): 
Already using the vfio-pci driver 00:03:09.850 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:10.115 01:11:01 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:10.115 01:11:01 -- setup/hugepages.sh@89 -- # local node 00:03:10.115 01:11:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:10.115 01:11:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:10.115 01:11:01 -- setup/hugepages.sh@92 -- # local surp 00:03:10.115 01:11:01 -- setup/hugepages.sh@93 -- # local resv 00:03:10.115 01:11:01 -- setup/hugepages.sh@94 -- # local anon 00:03:10.115 01:11:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:10.115 01:11:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:10.115 01:11:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:10.115 01:11:01 -- setup/common.sh@18 -- # local node= 00:03:10.115 01:11:01 -- setup/common.sh@19 -- # local var val 00:03:10.115 01:11:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.115 01:11:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.115 01:11:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.115 01:11:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.115 01:11:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.115 01:11:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.115 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 01:11:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44804712 kB' 'MemAvailable: 49334760 kB' 'Buffers: 2704 kB' 'Cached: 11213680 kB' 'SwapCached: 0 kB' 'Active: 7223680 kB' 'Inactive: 4516068 kB' 'Active(anon): 6828056 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526716 kB' 'Mapped: 208680 kB' 'Shmem: 6304692 kB' 'KReclaimable: 224392 kB' 'Slab: 597528 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 373136 kB' 'KernelStack: 12784 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7911748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:10.115 01:11:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.115 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.115 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 01:11:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.115 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.115 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 
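The even_2G_alloc test now running asked for 2097152 kB of hugepages; at the 2048 kB page size reported in the snapshot above that is 2097152 / 2048 = 1024 pages, and with HUGE_EVEN_ALLOC=yes the harness expects an even split of 512 per NUMA node, matching the nodes_test assignments in the trace. verify_nr_hugepages also inspects the transparent hugepage mode (the 'always [madvise] never' string tested above, presumably read from /sys/kernel/mm/transparent_hugepage/enabled): since the active mode is not [never], it goes on to fetch AnonHugePages, which is 0 kB in this run. A back-of-envelope check of the same numbers (illustrative, not part of the harness):

size_kb=2097152                                            # size requested by even_2G_alloc
hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this machine
echo "$(( size_kb / hp_kb )) pages total, $(( size_kb / hp_kb / 2 )) per node"   # 1024 / 512
cat /sys/kernel/mm/transparent_hugepage/enabled            # active mode is the bracketed entry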
00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 
01:11:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 01:11:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.117 01:11:01 -- setup/common.sh@33 -- # echo 0 00:03:10.117 01:11:01 -- setup/common.sh@33 -- # 
return 0 00:03:10.117 01:11:01 -- setup/hugepages.sh@97 -- # anon=0 00:03:10.117 01:11:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:10.117 01:11:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.117 01:11:01 -- setup/common.sh@18 -- # local node= 00:03:10.117 01:11:01 -- setup/common.sh@19 -- # local var val 00:03:10.117 01:11:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.117 01:11:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.117 01:11:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.117 01:11:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.117 01:11:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.117 01:11:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44804932 kB' 'MemAvailable: 49334980 kB' 'Buffers: 2704 kB' 'Cached: 11213680 kB' 'SwapCached: 0 kB' 'Active: 7224580 kB' 'Inactive: 4516068 kB' 'Active(anon): 6828956 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527592 kB' 'Mapped: 208680 kB' 'Shmem: 6304692 kB' 'KReclaimable: 224392 kB' 'Slab: 597500 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 373108 kB' 'KernelStack: 12816 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7911760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 
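Note: the meminfo snapshot printed for this pass is internally consistent for hugepages. HugePages_Total: 1024 at a Hugepagesize of 2048 kB works out to 1024 x 2048 kB = 2097152 kB, exactly the Hugetlb figure reported, i.e. 2 GiB reserved. AnonHugePages is 0 kB, which is why the earlier lookup returned anon=0, and HugePages_Rsvd and HugePages_Surp are both 0, so the lookups below return 0 as well.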
01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.117 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.118 01:11:01 -- setup/common.sh@33 -- # echo 0 00:03:10.118 01:11:01 -- setup/common.sh@33 -- # return 0 00:03:10.118 01:11:01 -- setup/hugepages.sh@99 -- # surp=0 00:03:10.118 01:11:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:10.118 01:11:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:10.118 01:11:01 -- setup/common.sh@18 -- # local node= 00:03:10.118 01:11:01 -- setup/common.sh@19 -- # local var val 00:03:10.118 01:11:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.118 01:11:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.118 01:11:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.118 01:11:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.118 01:11:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.118 01:11:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60541712 kB' 'MemFree: 44804932 kB' 'MemAvailable: 49334980 kB' 'Buffers: 2704 kB' 'Cached: 11213684 kB' 'SwapCached: 0 kB' 'Active: 7223964 kB' 'Inactive: 4516068 kB' 'Active(anon): 6828340 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526988 kB' 'Mapped: 208680 kB' 'Shmem: 6304696 kB' 'KReclaimable: 224392 kB' 'Slab: 597500 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 373108 kB' 'KernelStack: 12816 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7911776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.118 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.118 01:11:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # 
continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.119 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.119 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.120 01:11:01 -- setup/common.sh@33 -- # echo 0 00:03:10.120 01:11:01 -- setup/common.sh@33 -- # return 0 00:03:10.120 01:11:01 -- setup/hugepages.sh@100 -- # resv=0 00:03:10.120 01:11:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:10.120 nr_hugepages=1024 00:03:10.120 01:11:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:10.120 resv_hugepages=0 00:03:10.120 01:11:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:10.120 surplus_hugepages=0 00:03:10.120 01:11:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:10.120 anon_hugepages=0 00:03:10.120 01:11:01 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.120 01:11:01 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:10.120 01:11:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:10.120 01:11:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:10.120 01:11:01 -- setup/common.sh@18 -- # local node= 00:03:10.120 01:11:01 -- setup/common.sh@19 -- # local var val 00:03:10.120 01:11:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.120 01:11:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.120 01:11:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.120 01:11:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.120 01:11:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.120 01:11:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44805988 kB' 'MemAvailable: 49336036 kB' 'Buffers: 2704 kB' 'Cached: 11213708 kB' 'SwapCached: 0 kB' 'Active: 7223300 kB' 'Inactive: 4516068 kB' 'Active(anon): 6827676 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526224 kB' 'Mapped: 208604 kB' 'Shmem: 6304720 kB' 'KReclaimable: 224392 kB' 'Slab: 597524 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 373132 kB' 'KernelStack: 12816 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7911788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 
-- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.120 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.120 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 
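Note: the hugepages.sh@107 check traced a little earlier, (( 1024 == nr_hugepages + surp + resv )), is the accounting identity this test relies on: with surp=0 and resv=0 taken from the lookups above, the expected page count must equal nr_hugepages plus surplus plus reserved pages, and the follow-up (( 1024 == nr_hugepages )) confirms the same total before the per-node breakdown. Roughly:

  (( want == nr_hugepages + surp + resv ))   # here: 1024 == 1024 + 0 + 0; 'want' is an illustrative name,
                                             # the actual left-hand variable is not visible in the trace
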
01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.121 01:11:01 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.121 01:11:01 -- setup/common.sh@33 -- # echo 1024 00:03:10.121 01:11:01 -- setup/common.sh@33 -- # return 0 00:03:10.121 01:11:01 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.121 01:11:01 -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.121 01:11:01 -- setup/hugepages.sh@27 -- # local node 00:03:10.121 01:11:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.121 01:11:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:10.121 01:11:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.121 01:11:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:10.121 01:11:01 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:10.121 01:11:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.121 01:11:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.121 01:11:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.121 01:11:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.121 01:11:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.121 01:11:01 -- setup/common.sh@18 -- # local node=0 00:03:10.121 01:11:01 -- setup/common.sh@19 -- # local var val 00:03:10.121 01:11:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.121 01:11:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.121 01:11:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.121 01:11:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.121 01:11:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.121 01:11:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.121 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22486100 kB' 'MemUsed: 10390840 kB' 'SwapCached: 0 kB' 'Active: 4766816 kB' 'Inactive: 3428384 kB' 'Active(anon): 4494884 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3428384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8081808 kB' 'Mapped: 85668 kB' 'AnonPages: 116536 kB' 'Shmem: 4381492 kB' 'KernelStack: 6920 kB' 'PageTables: 3204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91252 kB' 'Slab: 305548 kB' 'SReclaimable: 91252 kB' 'SUnreclaim: 214296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 
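Note: the get_nodes step traced above enumerates /sys/devices/system/node/node<N>, expects the 1024 pages to be split evenly at 512 per node across the two NUMA nodes, and then parses each node's own meminfo file the same way the global one was parsed. Roughly, under the standard sysfs layout (variable names mirror the trace):

  shopt -s extglob                               # needed for the +([0-9]) glob used in the trace
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=512              # 1024 hugepages split across 2 nodes
  done
  mem_f=/sys/devices/system/node/node0/meminfo   # per-node stats are read from here when the node exists
  echo "nodes: ${!nodes_sys[*]} -> ${nodes_sys[*]}"
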
00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.122 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.122 01:11:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.122 01:11:01 -- setup/common.sh@33 -- # echo 0 00:03:10.122 01:11:01 -- setup/common.sh@33 -- # return 0 00:03:10.122 01:11:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.122 01:11:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.122 01:11:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.122 01:11:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:10.123 01:11:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.123 01:11:01 -- setup/common.sh@18 -- # local node=1 00:03:10.123 01:11:01 -- setup/common.sh@19 -- # local var val 00:03:10.123 01:11:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.123 01:11:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.123 01:11:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:10.123 01:11:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:10.123 01:11:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.123 01:11:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.123 
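(Editorial aside.) The long xtrace runs above (node 0) and below (node 1) are the script's get_meminfo helper scanning a meminfo file for a single key. Condensed, the lookup works roughly as in the sketch below, under the hypothetical name get_meminfo_sketch; it mirrors the traced steps (pick /proc/meminfo or the node-local file, strip the "Node N " prefix, split on ": ") but is not the verbatim setup/common.sh source.

    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node lookups read the node-local meminfo instead of the global one.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # node-local files prefix every line with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }
    get_meminfo_sketch HugePages_Surp 0    # prints 0 on this run, matching the echo in the trace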
01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22320180 kB' 'MemUsed: 5344592 kB' 'SwapCached: 0 kB' 'Active: 2456524 kB' 'Inactive: 1087684 kB' 'Active(anon): 2332832 kB' 'Inactive(anon): 0 kB' 'Active(file): 123692 kB' 'Inactive(file): 1087684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3134620 kB' 'Mapped: 122936 kB' 'AnonPages: 409684 kB' 'Shmem: 1923244 kB' 'KernelStack: 5896 kB' 'PageTables: 5040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133140 kB' 'Slab: 291976 kB' 'SReclaimable: 133140 kB' 'SUnreclaim: 158836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.123 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.123 01:11:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.123 01:11:01 -- 
setup/common.sh@32 -- # continue 00:03:10.124 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.124 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.124 01:11:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.124 01:11:01 -- setup/common.sh@32 -- # continue 00:03:10.124 01:11:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.124 01:11:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.124 01:11:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.124 01:11:01 -- setup/common.sh@33 -- # echo 0 00:03:10.124 01:11:01 -- setup/common.sh@33 -- # return 0 00:03:10.124 01:11:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.124 01:11:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.124 01:11:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.124 01:11:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.124 01:11:01 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:10.124 node0=512 expecting 512 00:03:10.124 01:11:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.124 01:11:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.124 01:11:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.124 01:11:01 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:10.124 node1=512 expecting 512 00:03:10.124 01:11:01 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:10.124 00:03:10.124 real 0m1.362s 00:03:10.124 user 0m0.586s 00:03:10.124 sys 0m0.738s 00:03:10.124 01:11:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.124 01:11:01 -- common/autotest_common.sh@10 -- # set +x 00:03:10.124 ************************************ 00:03:10.124 END TEST even_2G_alloc 00:03:10.124 ************************************ 00:03:10.424 01:11:01 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:10.424 01:11:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:10.424 01:11:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:10.424 01:11:01 -- common/autotest_common.sh@10 -- # set +x 00:03:10.424 ************************************ 00:03:10.424 START TEST odd_alloc 00:03:10.424 ************************************ 00:03:10.424 01:11:01 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:10.424 01:11:01 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:10.424 01:11:01 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:10.424 01:11:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:10.424 01:11:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:10.424 01:11:01 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:10.424 01:11:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:10.424 01:11:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:10.424 01:11:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:10.424 01:11:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:10.424 01:11:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:10.424 01:11:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:10.424 01:11:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:10.424 01:11:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:10.424 01:11:01 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:10.424 01:11:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.424 01:11:01 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=512 00:03:10.424 01:11:01 -- setup/hugepages.sh@83 -- # : 513 00:03:10.424 01:11:01 -- setup/hugepages.sh@84 -- # : 1 00:03:10.424 01:11:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.424 01:11:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:10.424 01:11:01 -- setup/hugepages.sh@83 -- # : 0 00:03:10.424 01:11:01 -- setup/hugepages.sh@84 -- # : 0 00:03:10.424 01:11:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.424 01:11:01 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:10.424 01:11:01 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:10.424 01:11:01 -- setup/hugepages.sh@160 -- # setup output 00:03:10.424 01:11:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.424 01:11:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:11.360 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:11.360 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:11.360 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:11.360 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:11.360 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:11.360 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:11.360 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:11.360 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:11.360 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:11.360 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:11.360 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:11.360 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:11.360 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:11.360 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:11.360 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:11.360 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:11.360 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:11.624 01:11:03 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:11.624 01:11:03 -- setup/hugepages.sh@89 -- # local node 00:03:11.624 01:11:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:11.624 01:11:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:11.624 01:11:03 -- setup/hugepages.sh@92 -- # local surp 00:03:11.624 01:11:03 -- setup/hugepages.sh@93 -- # local resv 00:03:11.624 01:11:03 -- setup/hugepages.sh@94 -- # local anon 00:03:11.624 01:11:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:11.624 01:11:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:11.624 01:11:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:11.624 01:11:03 -- setup/common.sh@18 -- # local node= 00:03:11.624 01:11:03 -- setup/common.sh@19 -- # local var val 00:03:11.624 01:11:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.624 01:11:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.624 01:11:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.624 01:11:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.624 01:11:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.624 01:11:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.624 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.624 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.624 01:11:03 -- 
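(Editorial aside.) The get_test_nr_hugepages_per_node loop traced just above splits the odd total of 1025 hugepages across the two nodes by repeated integer division over the remaining nodes, which is why node0 ends up with 513 pages and node1 with 512. A minimal standalone sketch of that split (the function name is hypothetical, not the script's own):

    split_hugepages_sketch() {
        local remaining=$1 nodes=$2
        local -a per_node
        while (( nodes > 0 )); do
            per_node[nodes - 1]=$(( remaining / nodes ))      # integer division, highest node first
            remaining=$(( remaining - per_node[nodes - 1] ))
            (( nodes-- ))
        done
        echo "${per_node[*]}"
    }
    split_hugepages_sketch 1025 2    # prints "513 512": node0=513, node1=512, as traced above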
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44789660 kB' 'MemAvailable: 49319708 kB' 'Buffers: 2704 kB' 'Cached: 11213764 kB' 'SwapCached: 0 kB' 'Active: 7220148 kB' 'Inactive: 4516068 kB' 'Active(anon): 6824524 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522888 kB' 'Mapped: 207856 kB' 'Shmem: 6304776 kB' 'KReclaimable: 224392 kB' 'Slab: 597328 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372936 kB' 'KernelStack: 12720 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7896472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196692 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:11.624 01:11:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.624 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.624 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.624 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.624 01:11:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.624 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.624 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.624 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.624 01:11:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.624 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.624 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.624 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.624 01:11:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.624 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.624 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.624 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.624 01:11:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 
01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.625 01:11:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.625 01:11:03 -- setup/common.sh@33 -- # echo 0 00:03:11.625 01:11:03 -- setup/common.sh@33 -- # return 0 00:03:11.625 01:11:03 -- setup/hugepages.sh@97 -- # anon=0 00:03:11.625 01:11:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:11.625 01:11:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.625 01:11:03 -- setup/common.sh@18 -- # local node= 00:03:11.625 01:11:03 -- setup/common.sh@19 -- # local var val 00:03:11.625 01:11:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.625 01:11:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.625 01:11:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.625 01:11:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.625 01:11:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.625 01:11:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.625 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44790620 kB' 'MemAvailable: 49320668 kB' 'Buffers: 2704 kB' 'Cached: 11213764 kB' 'SwapCached: 0 kB' 'Active: 7221572 kB' 'Inactive: 4516068 kB' 'Active(anon): 6825948 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524356 kB' 'Mapped: 207856 kB' 'Shmem: 6304776 kB' 'KReclaimable: 224392 kB' 'Slab: 597312 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 
372920 kB' 'KernelStack: 12816 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7899020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Active(file) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # continue 
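(Editorial aside.) The global meminfo scans above and below are verify_nr_hugepages collecting the AnonHugePages, HugePages_Surp and HugePages_Rsvd counters; the surplus and reserved values then feed the same total check that was traced earlier for the even case, now against the odd total of 1025. The bookkeeping condenses to roughly the following; awk is used here only for brevity, the traced script does the lookup in pure bash as in the earlier aside.

    # Illustrative condensation of the verify step, not the script's own code.
    anon=$(awk  '$1 == "AnonHugePages:"   {print $2}' /proc/meminfo)   # 0 on this run
    surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)   # 0
    resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)   # 0
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)   # 1025 in this odd_alloc pass
    nr_hugepages=1025
    (( total == nr_hugepages + surp + resv )) && echo "hugepage total verified"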
00:03:11.626 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.626 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.626 01:11:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.627 01:11:03 -- setup/common.sh@33 -- # echo 0 00:03:11.627 01:11:03 -- setup/common.sh@33 -- # return 0 00:03:11.627 01:11:03 -- setup/hugepages.sh@99 -- # surp=0 00:03:11.627 01:11:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:11.627 01:11:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:11.627 01:11:03 -- setup/common.sh@18 -- # local node= 00:03:11.627 01:11:03 -- setup/common.sh@19 -- # local var val 00:03:11.627 01:11:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.627 01:11:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.627 01:11:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.627 01:11:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.627 01:11:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.627 01:11:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44791396 kB' 'MemAvailable: 49321444 kB' 'Buffers: 2704 kB' 'Cached: 11213772 kB' 'SwapCached: 0 kB' 'Active: 7220836 kB' 'Inactive: 4516068 kB' 'Active(anon): 6825212 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523668 kB' 'Mapped: 207760 kB' 'Shmem: 6304784 kB' 'KReclaimable: 224392 kB' 'Slab: 597296 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372904 kB' 'KernelStack: 12896 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7900520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.627 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.627 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 
01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 
00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.628 01:11:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.628 01:11:03 -- setup/common.sh@33 -- # echo 0 00:03:11.628 01:11:03 -- setup/common.sh@33 -- # return 0 00:03:11.628 01:11:03 -- setup/hugepages.sh@100 -- # resv=0 00:03:11.628 01:11:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:11.628 nr_hugepages=1025 00:03:11.628 01:11:03 -- setup/hugepages.sh@103 -- # 
echo resv_hugepages=0 00:03:11.628 resv_hugepages=0 00:03:11.628 01:11:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:11.628 surplus_hugepages=0 00:03:11.628 01:11:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:11.628 anon_hugepages=0 00:03:11.628 01:11:03 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:11.628 01:11:03 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:11.628 01:11:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:11.628 01:11:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:11.628 01:11:03 -- setup/common.sh@18 -- # local node= 00:03:11.628 01:11:03 -- setup/common.sh@19 -- # local var val 00:03:11.628 01:11:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.628 01:11:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.628 01:11:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.628 01:11:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.628 01:11:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.628 01:11:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.628 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.629 01:11:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44798024 kB' 'MemAvailable: 49328072 kB' 'Buffers: 2704 kB' 'Cached: 11213780 kB' 'SwapCached: 0 kB' 'Active: 7221740 kB' 'Inactive: 4516068 kB' 'Active(anon): 6826116 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524576 kB' 'Mapped: 207760 kB' 'Shmem: 6304792 kB' 'KReclaimable: 224392 kB' 'Slab: 597296 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372904 kB' 'KernelStack: 13120 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7900532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.629 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.629 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- 
setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.630 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.630 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.631 01:11:03 -- setup/common.sh@33 -- # echo 1025 00:03:11.631 01:11:03 -- setup/common.sh@33 -- # return 0 00:03:11.631 01:11:03 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:11.631 01:11:03 -- setup/hugepages.sh@112 -- # get_nodes 00:03:11.631 01:11:03 -- setup/hugepages.sh@27 -- # local node 00:03:11.631 01:11:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.631 01:11:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:11.631 01:11:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.631 01:11:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:11.631 01:11:03 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:11.631 01:11:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.631 01:11:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.631 01:11:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.631 01:11:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:11.631 01:11:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.631 01:11:03 -- setup/common.sh@18 -- # local node=0 00:03:11.631 01:11:03 -- setup/common.sh@19 -- # local var val 00:03:11.631 01:11:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.631 01:11:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.631 
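The trace up to this point is the test's hugepage accounting step: get_meminfo slurps /proc/meminfo into an array, walks it entry by entry with IFS=': ', and echoes the value once the requested key (here HugePages_Total) matches, after which hugepages.sh checks that 1025 == nr_hugepages + surp + resv. A minimal sketch of that lookup pattern, assuming plain bash and standard /proc/meminfo formatting (meminfo_value is a hypothetical name for illustration, not the helper in setup/common.sh):

#!/usr/bin/env bash
# Sketch only: look up one field from /proc/meminfo by splitting each
# line on ': ' -- the same scan pattern visible in the trace above.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"      # numeric value; unit (kB), if any, lands in $_
            return 0
        fi
    done < /proc/meminfo
    return 1
}

# Recompute the consistency check the test performs.
total=$(meminfo_value HugePages_Total)
surp=$(meminfo_value HugePages_Surp)
resv=$(meminfo_value HugePages_Rsvd)
echo "total=$total surp=$surp resv=$resv"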
01:11:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:11.631 01:11:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:11.631 01:11:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.631 01:11:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22487508 kB' 'MemUsed: 10389432 kB' 'SwapCached: 0 kB' 'Active: 4765284 kB' 'Inactive: 3428384 kB' 'Active(anon): 4493352 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3428384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8081888 kB' 'Mapped: 85064 kB' 'AnonPages: 114940 kB' 'Shmem: 4381572 kB' 'KernelStack: 6936 kB' 'PageTables: 3176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91252 kB' 'Slab: 305464 kB' 'SReclaimable: 91252 kB' 'SUnreclaim: 214212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.631 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.631 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@33 -- # echo 0 00:03:11.632 01:11:03 -- setup/common.sh@33 -- # return 0 00:03:11.632 01:11:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.632 01:11:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.632 01:11:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.632 01:11:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:11.632 01:11:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.632 01:11:03 -- setup/common.sh@18 -- # local node=1 00:03:11.632 01:11:03 -- setup/common.sh@19 -- # local var val 00:03:11.632 01:11:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.632 01:11:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.632 01:11:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:11.632 01:11:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:11.632 01:11:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.632 01:11:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22314864 kB' 'MemUsed: 5349908 kB' 'SwapCached: 0 kB' 'Active: 2456584 kB' 'Inactive: 1087684 kB' 'Active(anon): 2332892 kB' 'Inactive(anon): 0 kB' 'Active(file): 123692 kB' 'Inactive(file): 1087684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3134620 kB' 'Mapped: 122696 kB' 'AnonPages: 409656 kB' 'Shmem: 1923244 kB' 'KernelStack: 6072 kB' 'PageTables: 6460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133140 kB' 'Slab: 291800 kB' 'SReclaimable: 133140 kB' 'SUnreclaim: 158660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 
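From here the same lookup is repeated per NUMA node: with a node argument, mem_f switches to /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node <N> " prefix that gets stripped before the field match; node0 reports HugePages_Total: 512 and node1 reports 513, which is what the odd_alloc comparison below is checking. A rough per-node sketch under the same assumptions (node_hugepages_total is illustrative, not part of the test scripts):

#!/usr/bin/env bash
# Sketch only: read HugePages_Total for one NUMA node. Node meminfo lines
# look like "Node 0 HugePages_Total:   512", so the "Node N " prefix is
# dropped before splitting on ': ', mirroring the trace above.
node_hugepages_total() {
    local node=$1 line var val _
    while read -r line; do
        line=${line#Node "$node" }                 # strip the "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Total ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

for n in 0 1; do
    echo "node${n}: $(node_hugepages_total "$n") hugepages"
done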
00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.632 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.632 01:11:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # continue 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.633 01:11:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.633 01:11:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.633 01:11:03 -- setup/common.sh@33 -- # echo 0 00:03:11.633 01:11:03 -- setup/common.sh@33 -- # return 0 00:03:11.633 01:11:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.633 01:11:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.633 01:11:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.633 01:11:03 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:11.633 node0=512 expecting 513 00:03:11.633 01:11:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.633 01:11:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.633 01:11:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.633 01:11:03 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:11.633 node1=513 expecting 512 00:03:11.633 01:11:03 -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:11.633 00:03:11.633 real 0m1.420s 00:03:11.633 user 0m0.546s 00:03:11.633 sys 0m0.837s 00:03:11.633 01:11:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.633 01:11:03 -- common/autotest_common.sh@10 -- # set +x 00:03:11.633 ************************************ 00:03:11.633 END TEST odd_alloc 00:03:11.633 ************************************ 00:03:11.633 01:11:03 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:11.633 01:11:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:11.633 01:11:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:11.633 01:11:03 -- common/autotest_common.sh@10 -- # set +x 00:03:11.633 ************************************ 00:03:11.633 START TEST custom_alloc 00:03:11.633 ************************************ 00:03:11.633 01:11:03 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:11.633 01:11:03 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:11.633 01:11:03 -- setup/hugepages.sh@169 -- # local node 00:03:11.633 01:11:03 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:11.633 01:11:03 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:11.633 01:11:03 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:11.633 01:11:03 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:11.633 01:11:03 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:11.633 01:11:03 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:11.633 01:11:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:11.633 01:11:03 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:11.633 01:11:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.633 01:11:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:11.633 01:11:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:11.633 01:11:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.633 01:11:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.633 01:11:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:11.633 01:11:03 -- setup/hugepages.sh@83 -- # : 256 00:03:11.633 01:11:03 -- setup/hugepages.sh@84 -- # : 1 00:03:11.633 01:11:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:11.633 01:11:03 -- setup/hugepages.sh@83 -- # : 0 00:03:11.633 01:11:03 -- setup/hugepages.sh@84 -- # : 0 00:03:11.633 01:11:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:11.633 01:11:03 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:11.633 01:11:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:11.633 01:11:03 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:11.633 01:11:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:11.633 01:11:03 -- setup/hugepages.sh@62 -- # 
user_nodes=() 00:03:11.633 01:11:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.633 01:11:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:11.633 01:11:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:11.633 01:11:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.633 01:11:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.633 01:11:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:11.633 01:11:03 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:11.633 01:11:03 -- setup/hugepages.sh@78 -- # return 0 00:03:11.633 01:11:03 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:11.633 01:11:03 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:11.633 01:11:03 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:11.633 01:11:03 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:11.633 01:11:03 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:11.633 01:11:03 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:11.633 01:11:03 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:11.633 01:11:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.633 01:11:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:11.633 01:11:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:11.633 01:11:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.633 01:11:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.633 01:11:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:11.633 01:11:03 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:11.634 01:11:03 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:11.634 01:11:03 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:11.634 01:11:03 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:11.634 01:11:03 -- setup/hugepages.sh@78 -- # return 0 00:03:11.634 01:11:03 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:11.634 01:11:03 -- setup/hugepages.sh@187 -- # setup output 00:03:11.634 01:11:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.634 01:11:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:13.012 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:13.012 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:13.012 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:13.012 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:13.012 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:13.012 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:13.012 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:13.012 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:13.012 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:13.012 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:13.012 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:13.012 0000:80:04.5 (8086 
0e25): Already using the vfio-pci driver 00:03:13.012 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:13.012 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:13.012 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:13.012 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:13.012 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:13.012 01:11:04 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:13.012 01:11:04 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:13.012 01:11:04 -- setup/hugepages.sh@89 -- # local node 00:03:13.012 01:11:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:13.012 01:11:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:13.012 01:11:04 -- setup/hugepages.sh@92 -- # local surp 00:03:13.012 01:11:04 -- setup/hugepages.sh@93 -- # local resv 00:03:13.012 01:11:04 -- setup/hugepages.sh@94 -- # local anon 00:03:13.012 01:11:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:13.012 01:11:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:13.012 01:11:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:13.012 01:11:04 -- setup/common.sh@18 -- # local node= 00:03:13.012 01:11:04 -- setup/common.sh@19 -- # local var val 00:03:13.012 01:11:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:13.012 01:11:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.012 01:11:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.012 01:11:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.012 01:11:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.012 01:11:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.012 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.012 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.012 01:11:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43742604 kB' 'MemAvailable: 48272652 kB' 'Buffers: 2704 kB' 'Cached: 11213860 kB' 'SwapCached: 0 kB' 'Active: 7220344 kB' 'Inactive: 4516068 kB' 'Active(anon): 6824720 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523008 kB' 'Mapped: 207736 kB' 'Shmem: 6304872 kB' 'KReclaimable: 224392 kB' 'Slab: 597056 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372664 kB' 'KernelStack: 12768 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7896704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 
01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- 
setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.013 01:11:04 -- setup/common.sh@33 -- # echo 0 00:03:13.013 01:11:04 -- setup/common.sh@33 -- # return 0 00:03:13.013 01:11:04 -- setup/hugepages.sh@97 -- # anon=0 00:03:13.013 01:11:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:13.013 01:11:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.013 01:11:04 -- setup/common.sh@18 -- # local node= 00:03:13.013 01:11:04 -- setup/common.sh@19 -- # local var val 00:03:13.013 01:11:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:13.013 01:11:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.013 01:11:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.013 01:11:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.013 01:11:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.013 01:11:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43743468 kB' 'MemAvailable: 48273516 kB' 'Buffers: 2704 kB' 'Cached: 11213860 kB' 'SwapCached: 0 kB' 'Active: 7221440 kB' 'Inactive: 4516068 kB' 'Active(anon): 6825816 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524116 kB' 'Mapped: 207736 kB' 'Shmem: 6304872 kB' 'KReclaimable: 224392 kB' 'Slab: 597032 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372640 kB' 'KernelStack: 12832 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7896716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.013 
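Each get_meminfo pass in the trace works the same way: it mapfile-reads /proc/meminfo (or a per-node /sys/devices/system/node/nodeN/meminfo), strips any "Node N " prefix, then walks the entries until the requested key matches and echoes its value, which is why every non-matching field appears as a "continue" line. A compressed sketch of that lookup, using a hypothetical helper name rather than the SPDK function:

  get_meminfo_value() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      file=/sys/devices/system/node/node$node/meminfo
    # Per-node files prefix each line with "Node <n> "; drop it, then print
    # the value of the first line whose key matches.
    sed 's/^Node [0-9]* //' "$file" | awk -v k="$key:" '$1 == k { print $2; exit }'
  }
  get_meminfo_value AnonHugePages       # 0 on this machine, per the trace
  get_meminfo_value HugePages_Total 0   # per-node lookup on node 0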
01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.013 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.013 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 
01:11:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 
00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.014 01:11:04 -- setup/common.sh@33 -- # echo 0 00:03:13.014 01:11:04 -- setup/common.sh@33 -- # return 0 00:03:13.014 01:11:04 -- setup/hugepages.sh@99 -- # surp=0 00:03:13.014 01:11:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:13.014 01:11:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:13.014 01:11:04 -- setup/common.sh@18 -- # local node= 00:03:13.014 01:11:04 -- setup/common.sh@19 -- # local var val 00:03:13.014 01:11:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:13.014 01:11:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.014 01:11:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.014 01:11:04 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.014 01:11:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.014 01:11:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 01:11:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43743480 kB' 'MemAvailable: 48273528 kB' 'Buffers: 2704 kB' 'Cached: 11213876 kB' 'SwapCached: 0 kB' 'Active: 7220316 kB' 'Inactive: 4516068 kB' 'Active(anon): 6824692 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522964 kB' 'Mapped: 207732 kB' 'Shmem: 6304888 kB' 'KReclaimable: 224392 kB' 'Slab: 597080 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372688 kB' 'KernelStack: 12784 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7896732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.014 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # 
continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 01:11:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.015 01:11:04 -- setup/common.sh@33 -- # echo 0 00:03:13.015 01:11:04 -- setup/common.sh@33 -- # return 0 00:03:13.015 01:11:04 -- setup/hugepages.sh@100 -- # resv=0 00:03:13.015 01:11:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:13.015 nr_hugepages=1536 00:03:13.015 01:11:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:13.015 resv_hugepages=0 00:03:13.015 01:11:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:13.015 surplus_hugepages=0 00:03:13.015 01:11:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:13.015 anon_hugepages=0 00:03:13.015 01:11:04 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:13.015 01:11:04 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:13.015 01:11:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:13.015 01:11:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:13.015 01:11:04 -- setup/common.sh@18 -- # local node= 00:03:13.015 01:11:04 -- setup/common.sh@19 -- # local var val 00:03:13.015 01:11:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:13.015 01:11:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.015 01:11:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.015 01:11:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.015 01:11:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.015 01:11:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43743424 kB' 'MemAvailable: 48273472 kB' 'Buffers: 2704 kB' 'Cached: 11213892 kB' 'SwapCached: 0 kB' 'Active: 7220128 kB' 'Inactive: 4516068 kB' 'Active(anon): 6824504 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522824 kB' 'Mapped: 207732 kB' 'Shmem: 6304904 kB' 'KReclaimable: 224392 kB' 'Slab: 597080 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372688 kB' 'KernelStack: 12736 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7896376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.016 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.016 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.017 01:11:04 -- setup/common.sh@33 -- # echo 1536 00:03:13.017 01:11:04 -- setup/common.sh@33 -- # return 0 00:03:13.017 01:11:04 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:13.017 01:11:04 -- setup/hugepages.sh@112 -- # get_nodes 00:03:13.017 01:11:04 -- setup/hugepages.sh@27 -- # local node 00:03:13.017 01:11:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.017 01:11:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:13.017 01:11:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.017 01:11:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:13.017 01:11:04 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:13.017 01:11:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.017 01:11:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.017 01:11:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.017 01:11:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:13.017 01:11:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.017 01:11:04 -- setup/common.sh@18 -- # local node=0 00:03:13.017 01:11:04 -- setup/common.sh@19 -- # local var val 00:03:13.017 01:11:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:13.017 01:11:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.017 01:11:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:13.017 01:11:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:13.017 01:11:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.017 01:11:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22477744 kB' 'MemUsed: 10399196 kB' 'SwapCached: 0 kB' 'Active: 4763868 kB' 'Inactive: 3428384 kB' 'Active(anon): 4491936 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3428384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8081964 kB' 'Mapped: 85064 kB' 'AnonPages: 113392 kB' 'Shmem: 4381648 kB' 'KernelStack: 6840 kB' 'PageTables: 2864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91252 kB' 'Slab: 305348 kB' 'SReclaimable: 91252 kB' 'SUnreclaim: 214096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 
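The trace above and below is setup/common.sh's get_meminfo scanning /sys/devices/system/node/node0/meminfo for HugePages_Surp, field by field. A minimal sketch of that parsing loop, reconstructed from the xtrace output (the function body and the for-loop form are approximations, not the verbatim SPDK source):

    shopt -s extglob
    # get_meminfo <field> [node]: print the value of <field> from /proc/meminfo,
    # or from /sys/devices/system/node/node<n>/meminfo when a node id is given.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node <n> "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Surp 0    # prints 0 for node0 in the run traced here
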
00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- 
setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.017 01:11:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.017 01:11:04 -- setup/common.sh@33 -- # echo 0 00:03:13.017 01:11:04 -- setup/common.sh@33 -- # return 0 00:03:13.017 01:11:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.017 01:11:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.017 01:11:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.017 01:11:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:13.017 01:11:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.017 01:11:04 -- setup/common.sh@18 -- # local node=1 00:03:13.017 01:11:04 -- setup/common.sh@19 -- # local var val 00:03:13.017 01:11:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:13.017 01:11:04 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.017 01:11:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:13.017 01:11:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:13.017 01:11:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.017 01:11:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.017 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21270208 kB' 'MemUsed: 6394564 kB' 'SwapCached: 0 kB' 'Active: 2455956 kB' 'Inactive: 1087684 kB' 'Active(anon): 2332264 kB' 'Inactive(anon): 0 kB' 'Active(file): 123692 kB' 'Inactive(file): 1087684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3134660 kB' 'Mapped: 122668 kB' 'AnonPages: 409096 kB' 'Shmem: 1923284 kB' 'KernelStack: 5944 kB' 'PageTables: 4972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133140 kB' 'Slab: 291732 kB' 'SReclaimable: 133140 kB' 'SUnreclaim: 158592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 
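Earlier in this trace (hugepages.sh@27-33) get_nodes recorded the per-NUMA-node huge page counts that drive this check. A hedged sketch of that step; the nr_hugepages sysfs file read below is an assumption, since the trace only shows the resulting assignments nodes_sys[0]=512 and nodes_sys[1]=1024 on this two-node host:

    shopt -s extglob
    # Record how many default-size (2048 kB) huge pages each NUMA node exposes.
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}    # 2 in this run

The hugepages.sh@115-117 lines then add resv plus each node's HugePages_Surp (0 here) into nodes_test[node] before comparing against the expected 512/1024 split.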
00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- 
setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 
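A note on reading these tests: the right-hand side of every [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] line looks escaped only because xtrace re-prints a quoted pattern with each character backslash-escaped, so it stays a literal match rather than a glob. The underlying comparison is, in effect:

    [[ $var == "$get" ]] && { echo "$val"; return 0; }    # get=HugePages_Surp in this call
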
00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # continue 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.018 01:11:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.018 01:11:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.018 01:11:04 -- setup/common.sh@33 -- # echo 0 00:03:13.018 01:11:04 -- setup/common.sh@33 -- # return 0 00:03:13.018 01:11:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.018 01:11:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.018 01:11:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.018 01:11:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.018 01:11:04 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:13.018 node0=512 expecting 512 00:03:13.018 01:11:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.018 01:11:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.018 01:11:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.018 01:11:04 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:13.018 node1=1024 expecting 1024 00:03:13.018 01:11:04 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:13.018 00:03:13.018 real 0m1.401s 00:03:13.018 user 0m0.599s 00:03:13.018 sys 0m0.766s 00:03:13.018 01:11:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.018 01:11:04 -- common/autotest_common.sh@10 -- # set +x 00:03:13.018 ************************************ 00:03:13.018 END TEST custom_alloc 00:03:13.018 ************************************ 00:03:13.018 01:11:04 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:13.018 01:11:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:13.018 01:11:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:13.018 01:11:04 -- common/autotest_common.sh@10 -- # set +x 00:03:13.018 ************************************ 00:03:13.018 START TEST no_shrink_alloc 00:03:13.018 ************************************ 00:03:13.018 01:11:04 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:13.018 01:11:04 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:13.018 01:11:04 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:13.018 01:11:04 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:13.018 01:11:04 -- setup/hugepages.sh@51 -- # shift 00:03:13.018 01:11:04 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:13.018 01:11:04 -- setup/hugepages.sh@52 -- # local node_ids 00:03:13.018 01:11:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:13.018 01:11:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:13.018 01:11:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:13.018 01:11:04 -- 
setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:13.018 01:11:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:13.018 01:11:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:13.018 01:11:04 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:13.018 01:11:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:13.018 01:11:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:13.018 01:11:04 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:13.018 01:11:04 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:13.018 01:11:04 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:13.018 01:11:04 -- setup/hugepages.sh@73 -- # return 0 00:03:13.018 01:11:04 -- setup/hugepages.sh@198 -- # setup output 00:03:13.018 01:11:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.018 01:11:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.398 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:14.398 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:14.398 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:14.398 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:14.398 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:14.398 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:14.398 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:14.398 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:14.398 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:14.398 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:14.398 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:14.398 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:14.398 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:14.398 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:14.398 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:14.398 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:14.398 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:14.398 01:11:06 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:14.398 01:11:06 -- setup/hugepages.sh@89 -- # local node 00:03:14.398 01:11:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:14.398 01:11:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:14.398 01:11:06 -- setup/hugepages.sh@92 -- # local surp 00:03:14.398 01:11:06 -- setup/hugepages.sh@93 -- # local resv 00:03:14.398 01:11:06 -- setup/hugepages.sh@94 -- # local anon 00:03:14.398 01:11:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:14.398 01:11:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:14.398 01:11:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:14.398 01:11:06 -- setup/common.sh@18 -- # local node= 00:03:14.398 01:11:06 -- setup/common.sh@19 -- # local var val 00:03:14.398 01:11:06 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.398 01:11:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.398 01:11:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.398 01:11:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.398 01:11:06 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.398 01:11:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.398 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.398 01:11:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:14.398 01:11:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44763372 kB' 'MemAvailable: 49293420 kB' 'Buffers: 2704 kB' 'Cached: 11213956 kB' 'SwapCached: 0 kB' 'Active: 7225752 kB' 'Inactive: 4516068 kB' 'Active(anon): 6830128 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528296 kB' 'Mapped: 208212 kB' 'Shmem: 6304968 kB' 'KReclaimable: 224392 kB' 'Slab: 596924 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372532 kB' 'KernelStack: 12768 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7903048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196696 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:14.398 01:11:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.398 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.398 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.398 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.398 01:11:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.398 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.398 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.398 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.398 01:11:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.398 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.398 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.398 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.398 01:11:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.398 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.398 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.398 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.398 01:11:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.398 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 
01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
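The verify_nr_hugepages step being traced here first checks whether transparent huge pages are globally disabled before reading AnonHugePages; the string "always [madvise] never" in the hugepages.sh@96 test is the contents of the kernel's THP "enabled" file, with brackets marking the active mode. A hedged sketch of that check (the sysfs path is the standard kernel one; the exact SPDK wording may differ):

    # Only count anonymous THP usage when THP is not globally disabled.
    # /sys/kernel/mm/transparent_hugepage/enabled reads e.g. "always [madvise] never";
    # the bracketed word is the active setting.
    thp_enabled=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp_enabled != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "AnonHugePages in use: ${anon:-0} kB"    # 0 kB in the run traced here
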
00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 01:11:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.399 01:11:06 -- setup/common.sh@33 -- # echo 0 00:03:14.399 01:11:06 -- setup/common.sh@33 -- # return 0 00:03:14.399 01:11:06 -- setup/hugepages.sh@97 -- # anon=0 00:03:14.399 01:11:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:14.399 01:11:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.399 01:11:06 -- setup/common.sh@18 -- # local node= 00:03:14.399 01:11:06 -- setup/common.sh@19 -- # local var val 00:03:14.399 01:11:06 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.399 01:11:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.399 01:11:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.399 01:11:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.399 01:11:06 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.399 01:11:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44767476 kB' 'MemAvailable: 49297524 kB' 'Buffers: 2704 kB' 'Cached: 11213956 kB' 'SwapCached: 0 kB' 'Active: 7227056 kB' 'Inactive: 4516068 kB' 'Active(anon): 6831432 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529652 kB' 'Mapped: 208632 kB' 'Shmem: 6304968 kB' 'KReclaimable: 
224392 kB' 'Slab: 596984 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372592 kB' 'KernelStack: 12784 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7903060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196664 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
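Both custom_alloc and no_shrink_alloc reduce to the same bookkeeping identity: the system-wide HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages (hugepages.sh@110 earlier checked 1536 against exactly that sum), and custom_alloc additionally checked the per-node 512/1024 split. A small worked check using values visible in this trace; the variable names are illustrative, not the SPDK script's:

    nr_hugepages=1024 surp=0 resv=0    # values for the no_shrink_alloc run traced here
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "global huge page count verified: $total"
    fi
    # custom_alloc then compared the per-node split it got against what it asked for:
    #   [[ "512,1024" == "512,1024" ]]   ->   node0=512, node1=1024 as expected
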
00:03:14.400 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.400 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.400 01:11:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.401 01:11:06 -- setup/common.sh@33 -- # echo 0 00:03:14.401 01:11:06 -- setup/common.sh@33 -- # return 0 00:03:14.401 01:11:06 -- setup/hugepages.sh@99 -- # surp=0 00:03:14.401 01:11:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:14.401 01:11:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:14.401 01:11:06 -- setup/common.sh@18 -- # local node= 00:03:14.401 01:11:06 -- setup/common.sh@19 -- # local var val 00:03:14.401 01:11:06 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.401 01:11:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.401 01:11:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.401 01:11:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.401 01:11:06 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.401 01:11:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44768740 kB' 'MemAvailable: 49298788 kB' 'Buffers: 2704 kB' 'Cached: 11213972 kB' 'SwapCached: 0 kB' 'Active: 7220316 kB' 'Inactive: 4516068 kB' 'Active(anon): 6824692 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522912 kB' 'Mapped: 208120 kB' 'Shmem: 6304984 kB' 'KReclaimable: 224392 kB' 'Slab: 596992 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372600 kB' 'KernelStack: 12784 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7896960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 
-- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 
-- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.401 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.401 01:11:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.402 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.402 01:11:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.402 01:11:06 -- setup/common.sh@33 -- # echo 0 00:03:14.402 01:11:06 -- setup/common.sh@33 -- # return 0 00:03:14.402 01:11:06 -- setup/hugepages.sh@100 -- # resv=0 00:03:14.402 01:11:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:14.402 nr_hugepages=1024 
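The trace above is the setup helper's get_meminfo walking every /proc/meminfo key until it reaches the one requested (HugePages_Rsvd in this pass), echoing its value and returning; every other key just hits the "continue" branch. A minimal sketch of that lookup, reconstructed from the traced commands rather than copied from the repository, looks roughly like this:

  get_meminfo() {
      # sketch only: the real common.sh helper slurps the file with mapfile and
      # strips the "Node N" prefix via extglob, but the effect is the same
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # per-node lookups read that node's own meminfo when it exists
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      while IFS=': ' read -r var val _; do
          # every non-matching key falls through ("continue" in the trace above)
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
      return 1
  }
  # e.g. get_meminfo HugePages_Rsvd    -> 0
  #      get_meminfo HugePages_Surp 0  -> the value from node0/meminfo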
00:03:14.402 01:11:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:14.402 resv_hugepages=0 00:03:14.402 01:11:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:14.402 surplus_hugepages=0 00:03:14.402 01:11:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:14.402 anon_hugepages=0 00:03:14.402 01:11:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.402 01:11:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:14.402 01:11:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:14.402 01:11:06 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:14.402 01:11:06 -- setup/common.sh@18 -- # local node= 00:03:14.402 01:11:06 -- setup/common.sh@19 -- # local var val 00:03:14.663 01:11:06 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.663 01:11:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.663 01:11:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.663 01:11:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.663 01:11:06 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.663 01:11:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.663 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.663 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.663 01:11:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44767760 kB' 'MemAvailable: 49297808 kB' 'Buffers: 2704 kB' 'Cached: 11213976 kB' 'SwapCached: 0 kB' 'Active: 7220548 kB' 'Inactive: 4516068 kB' 'Active(anon): 6824924 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523156 kB' 'Mapped: 207776 kB' 'Shmem: 6304988 kB' 'KReclaimable: 224392 kB' 'Slab: 596992 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372600 kB' 'KernelStack: 12800 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7896976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:14.663 01:11:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.663 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.663 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.663 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.664 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.664 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.665 01:11:06 -- setup/common.sh@33 -- # echo 1024 00:03:14.665 01:11:06 -- setup/common.sh@33 -- # return 0 00:03:14.665 01:11:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.665 01:11:06 -- setup/hugepages.sh@112 -- # get_nodes 00:03:14.665 01:11:06 -- setup/hugepages.sh@27 -- # local node 00:03:14.665 01:11:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.665 01:11:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:14.665 01:11:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.665 01:11:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:14.665 01:11:06 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:14.665 01:11:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:14.665 01:11:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.665 01:11:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.665 01:11:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:14.665 01:11:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.665 01:11:06 -- setup/common.sh@18 -- # local node=0 00:03:14.665 01:11:06 -- setup/common.sh@19 -- # local var val 00:03:14.665 01:11:06 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.665 01:11:06 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.665 01:11:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:14.665 01:11:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:14.665 01:11:06 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.665 01:11:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21399184 kB' 'MemUsed: 11477756 kB' 'SwapCached: 0 kB' 'Active: 4764276 kB' 'Inactive: 3428384 kB' 'Active(anon): 4492344 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3428384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8081972 kB' 'Mapped: 85064 kB' 'AnonPages: 113836 kB' 'Shmem: 4381656 kB' 'KernelStack: 6856 kB' 'PageTables: 2960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91252 kB' 'Slab: 305224 kB' 'SReclaimable: 91252 kB' 'SUnreclaim: 213972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 
00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.665 01:11:06 -- 
setup/common.sh@32 -- # continue 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.665 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.665 01:11:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 
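The walk in progress here is the same lookup repeated against /sys/devices/system/node/node0/meminfo so the hugepage count can be attributed per NUMA node; it concludes just below with "node0=1024 expecting 1024". Condensed, and assuming only the values echoed in this trace (with the get_meminfo sketch above), the accounting this verify pass performs is roughly:

  nr_hugepages=1024   # pages requested earlier in the job
  surp=0              # get_meminfo HugePages_Surp  (system-wide)
  resv=0              # get_meminfo HugePages_Rsvd  (system-wide)
  total=1024          # get_meminfo HugePages_Total (system-wide)
  (( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0
  # each node's HugePages_Total is then read the same way and compared with
  # what the test expects for that node (all 1024 pages sit on node 0 here)
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      n=${node_dir##*node}
      echo "node$n=$(get_meminfo HugePages_Total "$n")"
  done

This is a sketch of the flow, not the hugepages.sh code itself.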
00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # continue 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.666 01:11:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.666 01:11:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.666 01:11:06 -- setup/common.sh@33 -- # echo 0 00:03:14.666 01:11:06 -- setup/common.sh@33 -- # return 0 00:03:14.666 01:11:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:14.666 01:11:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:14.666 01:11:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:14.666 01:11:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:14.666 01:11:06 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:14.666 node0=1024 expecting 1024 00:03:14.666 01:11:06 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:14.666 01:11:06 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:14.666 01:11:06 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:14.666 01:11:06 -- setup/hugepages.sh@202 -- # setup output 00:03:14.666 01:11:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.666 01:11:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:15.603 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:15.603 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:15.603 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:15.603 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:15.603 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:15.603 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:15.603 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:15.603 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:15.603 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:15.603 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:15.603 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:15.603 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:15.603 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:15.603 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:15.603 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:15.603 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:15.603 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:15.865 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:15.865 01:11:07 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:15.865 01:11:07 -- setup/hugepages.sh@89 -- # local node 00:03:15.865 01:11:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:15.865 01:11:07 -- setup/hugepages.sh@91 -- 
# local sorted_s 00:03:15.865 01:11:07 -- setup/hugepages.sh@92 -- # local surp 00:03:15.865 01:11:07 -- setup/hugepages.sh@93 -- # local resv 00:03:15.865 01:11:07 -- setup/hugepages.sh@94 -- # local anon 00:03:15.865 01:11:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:15.865 01:11:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:15.865 01:11:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:15.865 01:11:07 -- setup/common.sh@18 -- # local node= 00:03:15.865 01:11:07 -- setup/common.sh@19 -- # local var val 00:03:15.865 01:11:07 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.865 01:11:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.865 01:11:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.865 01:11:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.865 01:11:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.865 01:11:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.865 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.865 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44766160 kB' 'MemAvailable: 49296208 kB' 'Buffers: 2704 kB' 'Cached: 11214036 kB' 'SwapCached: 0 kB' 'Active: 7220868 kB' 'Inactive: 4516068 kB' 'Active(anon): 6825244 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523384 kB' 'Mapped: 207824 kB' 'Shmem: 6305048 kB' 'KReclaimable: 224392 kB' 'Slab: 596856 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372464 kB' 'KernelStack: 12784 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7897136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 
01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 
01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.866 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.866 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.867 01:11:07 -- setup/common.sh@33 -- # echo 0 00:03:15.867 01:11:07 -- setup/common.sh@33 -- # return 0 00:03:15.867 01:11:07 -- setup/hugepages.sh@97 -- # anon=0 00:03:15.867 01:11:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:15.867 01:11:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.867 01:11:07 -- setup/common.sh@18 -- # local node= 00:03:15.867 01:11:07 -- setup/common.sh@19 -- # local var val 00:03:15.867 01:11:07 -- 
setup/common.sh@20 -- # local mem_f mem 00:03:15.867 01:11:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.867 01:11:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.867 01:11:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.867 01:11:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.867 01:11:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44766788 kB' 'MemAvailable: 49296836 kB' 'Buffers: 2704 kB' 'Cached: 11214040 kB' 'SwapCached: 0 kB' 'Active: 7221224 kB' 'Inactive: 4516068 kB' 'Active(anon): 6825600 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523800 kB' 'Mapped: 207912 kB' 'Shmem: 6305052 kB' 'KReclaimable: 224392 kB' 'Slab: 596880 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372488 kB' 'KernelStack: 12800 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7897148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 
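The pass above and the ones that follow all use the same lookup helper: snapshot a meminfo file into an array, then scan it key by key until the requested field is found and print its value. A condensed sketch of that pattern, reconstructed from the trace (the function name and exact structure here are illustrative, not the verbatim setup/common.sh source):

    shopt -s extglob                      # needed for the +([0-9]) pattern below
    get_meminfo_sketch() {                # hypothetical name for the traced helper
        local get=$1 node=$2
        local mem_f=/proc/meminfo mem line var val _
        # With a node number, read that node's sysfs meminfo; with an empty node the
        # path /sys/devices/system/node/node/meminfo does not exist, so /proc/meminfo stays.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node <N> " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }

That fallback to /proc/meminfo when no node is given is exactly what the [[ -e ... ]] checks in this part of the trace show.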
00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.867 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.867 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 
00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.868 01:11:07 -- setup/common.sh@33 -- # echo 0 00:03:15.868 01:11:07 -- setup/common.sh@33 -- # return 0 00:03:15.868 01:11:07 -- setup/hugepages.sh@99 -- # surp=0 00:03:15.868 01:11:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.868 01:11:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.868 01:11:07 -- setup/common.sh@18 -- # local node= 00:03:15.868 01:11:07 -- setup/common.sh@19 -- # local var val 00:03:15.868 01:11:07 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.868 01:11:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.868 01:11:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.868 01:11:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.868 01:11:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.868 01:11:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44767020 kB' 'MemAvailable: 49297068 kB' 'Buffers: 2704 kB' 'Cached: 11214052 kB' 'SwapCached: 0 kB' 'Active: 7220428 kB' 'Inactive: 4516068 kB' 'Active(anon): 6824804 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523000 kB' 'Mapped: 207784 kB' 'Shmem: 6305064 kB' 'KReclaimable: 224392 kB' 'Slab: 596888 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372496 kB' 'KernelStack: 12816 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7897164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.868 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.868 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 
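The three system-wide lookups in this block fetch AnonHugePages (anonymous memory currently backed by transparent huge pages), HugePages_Surp (surplus pages allocated above nr_hugepages), and HugePages_Rsvd (pages reserved by mappings but not yet faulted in); each is expected to come back 0 in this run. An equivalent standalone check, not taken from the script itself:

    anon=$(awk '$1 == "AnonHugePages:"  {print $2}' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
    echo "anon=${anon} surp=${surp} resv=${resv}"   # all 0 in this run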
00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.869 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.869 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.869 01:11:07 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.870 01:11:07 -- setup/common.sh@33 -- # echo 0 00:03:15.870 01:11:07 -- setup/common.sh@33 -- # return 0 00:03:15.870 01:11:07 -- setup/hugepages.sh@100 -- # resv=0 00:03:15.870 01:11:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:15.870 nr_hugepages=1024 00:03:15.870 01:11:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:15.870 resv_hugepages=0 00:03:15.870 01:11:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:15.870 surplus_hugepages=0 00:03:15.870 01:11:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:15.870 anon_hugepages=0 00:03:15.870 01:11:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.870 01:11:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:15.870 01:11:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:15.870 01:11:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:15.870 01:11:07 -- setup/common.sh@18 -- # local node= 00:03:15.870 01:11:07 -- setup/common.sh@19 -- # local var val 00:03:15.870 01:11:07 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.870 01:11:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.870 01:11:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.870 01:11:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.870 01:11:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.870 01:11:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44767020 kB' 'MemAvailable: 49297068 kB' 'Buffers: 2704 kB' 'Cached: 11214064 kB' 'SwapCached: 0 kB' 'Active: 7220436 kB' 'Inactive: 4516068 kB' 'Active(anon): 6824812 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523000 kB' 'Mapped: 207784 kB' 'Shmem: 6305076 kB' 'KReclaimable: 224392 kB' 'Slab: 596888 kB' 'SReclaimable: 224392 kB' 'SUnreclaim: 372496 kB' 'KernelStack: 12816 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7897180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 
01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.870 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.870 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 
01:11:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 
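The HugePages_Total lookup running through this block feeds the checks at setup/hugepages.sh@107 and @110 visible above and just below: the pool size the kernel reports must equal the requested nr_hugepages plus any surplus and reserved pages. A minimal standalone version of that arithmetic, assuming the values echoed earlier in this run (nr_hugepages=1024, surplus=0, reserved=0):

    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "huge page pool consistent: $total pages, none surplus or reserved"
    else
        echo "unexpected pool size: $total != $((nr_hugepages + surp + resv))" >&2
    fi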
00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.871 01:11:07 -- setup/common.sh@33 -- # echo 1024 00:03:15.871 01:11:07 -- setup/common.sh@33 -- # return 0 00:03:15.871 01:11:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages 
+ surp + resv )) 00:03:15.871 01:11:07 -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.871 01:11:07 -- setup/hugepages.sh@27 -- # local node 00:03:15.871 01:11:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.871 01:11:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:15.871 01:11:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.871 01:11:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:15.871 01:11:07 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.871 01:11:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.871 01:11:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.871 01:11:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.871 01:11:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:15.871 01:11:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.871 01:11:07 -- setup/common.sh@18 -- # local node=0 00:03:15.871 01:11:07 -- setup/common.sh@19 -- # local var val 00:03:15.871 01:11:07 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.871 01:11:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.871 01:11:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.871 01:11:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.871 01:11:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.871 01:11:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.871 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.871 01:11:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21394092 kB' 'MemUsed: 11482848 kB' 'SwapCached: 0 kB' 'Active: 4764460 kB' 'Inactive: 3428384 kB' 'Active(anon): 4492528 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3428384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8081980 kB' 'Mapped: 85064 kB' 'AnonPages: 114048 kB' 'Shmem: 4381664 kB' 'KernelStack: 6872 kB' 'PageTables: 3008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91252 kB' 'Slab: 305120 kB' 'SReclaimable: 91252 kB' 'SUnreclaim: 213868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 
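The get_nodes loop above recorded the split this run expects (1024 pages on node0, 0 on node1, with no_nodes=2), and the per-node get_meminfo calls that begin here read each node's own counters from its sysfs meminfo, where every line carries a "Node <N>" prefix. A small illustrative loop (not the script's own code) that prints what each node reports:

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # per-node meminfo lines are prefixed with "Node <N>", hence fields $3/$4 below
        total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
        surp=$(awk  '$3 == "HugePages_Surp:"  {print $4}' "$node_dir/meminfo")
        echo "node${node}: HugePages_Total=${total} HugePages_Surp=${surp}"
    done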
00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # continue 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.872 01:11:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.872 01:11:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.872 01:11:07 -- setup/common.sh@33 -- # echo 0 00:03:15.872 01:11:07 -- setup/common.sh@33 -- # return 0 00:03:15.872 01:11:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.872 01:11:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.872 01:11:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.872 01:11:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.872 01:11:07 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:15.872 node0=1024 expecting 1024 00:03:15.872 01:11:07 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:15.872 00:03:15.872 real 0m2.831s 00:03:15.872 user 0m1.204s 00:03:15.872 sys 0m1.557s 00:03:15.872 01:11:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.872 01:11:07 -- common/autotest_common.sh@10 -- # set +x 00:03:15.872 ************************************ 00:03:15.873 END TEST no_shrink_alloc 00:03:15.873 ************************************ 00:03:15.873 01:11:07 -- setup/hugepages.sh@217 -- # clear_hp 00:03:15.873 01:11:07 -- setup/hugepages.sh@37 -- # local node hp 00:03:15.873 01:11:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:15.873 01:11:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.873 01:11:07 -- setup/hugepages.sh@41 -- # echo 0 00:03:15.873 
01:11:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.873 01:11:07 -- setup/hugepages.sh@41 -- # echo 0 00:03:15.873 01:11:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:15.873 01:11:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.873 01:11:07 -- setup/hugepages.sh@41 -- # echo 0 00:03:15.873 01:11:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.873 01:11:07 -- setup/hugepages.sh@41 -- # echo 0 00:03:15.873 01:11:07 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:15.873 01:11:07 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:15.873 00:03:15.873 real 0m11.263s 00:03:15.873 user 0m4.312s 00:03:15.873 sys 0m5.902s 00:03:15.873 01:11:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.873 01:11:07 -- common/autotest_common.sh@10 -- # set +x 00:03:15.873 ************************************ 00:03:15.873 END TEST hugepages 00:03:15.873 ************************************ 00:03:16.132 01:11:07 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:16.132 01:11:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:16.132 01:11:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:16.132 01:11:07 -- common/autotest_common.sh@10 -- # set +x 00:03:16.132 ************************************ 00:03:16.132 START TEST driver 00:03:16.132 ************************************ 00:03:16.132 01:11:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:16.132 * Looking for test storage... 
00:03:16.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:16.132 01:11:07 -- setup/driver.sh@68 -- # setup reset 00:03:16.132 01:11:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.132 01:11:07 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.666 01:11:10 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:18.666 01:11:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:18.666 01:11:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:18.666 01:11:10 -- common/autotest_common.sh@10 -- # set +x 00:03:18.666 ************************************ 00:03:18.666 START TEST guess_driver 00:03:18.666 ************************************ 00:03:18.666 01:11:10 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:18.666 01:11:10 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:18.666 01:11:10 -- setup/driver.sh@47 -- # local fail=0 00:03:18.666 01:11:10 -- setup/driver.sh@49 -- # pick_driver 00:03:18.666 01:11:10 -- setup/driver.sh@36 -- # vfio 00:03:18.666 01:11:10 -- setup/driver.sh@21 -- # local iommu_grups 00:03:18.666 01:11:10 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:18.666 01:11:10 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:18.666 01:11:10 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:18.666 01:11:10 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:18.666 01:11:10 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:18.666 01:11:10 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:18.666 01:11:10 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:18.666 01:11:10 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:18.666 01:11:10 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:18.666 01:11:10 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:18.666 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:18.666 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:18.666 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:18.666 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:18.666 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:18.666 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:18.666 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:18.666 01:11:10 -- setup/driver.sh@30 -- # return 0 00:03:18.666 01:11:10 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:18.666 01:11:10 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:18.666 01:11:10 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:18.666 01:11:10 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:18.666 Looking for driver=vfio-pci 00:03:18.666 01:11:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.666 01:11:10 -- setup/driver.sh@45 -- # setup output config 00:03:18.666 01:11:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.666 01:11:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.043 01:11:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.043 01:11:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.043 01:11:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.984 01:11:12 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:20.984 01:11:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.984 01:11:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.984 01:11:12 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:20.984 01:11:12 -- setup/driver.sh@65 -- # setup reset 00:03:20.984 01:11:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.984 01:11:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.515 00:03:23.515 real 0m4.824s 00:03:23.515 user 0m1.156s 00:03:23.515 sys 0m1.815s 00:03:23.515 01:11:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.515 01:11:15 -- common/autotest_common.sh@10 -- # set +x 00:03:23.515 ************************************ 00:03:23.515 END TEST guess_driver 00:03:23.515 ************************************ 00:03:23.515 00:03:23.515 real 0m7.439s 00:03:23.515 user 0m1.726s 00:03:23.515 sys 0m2.888s 00:03:23.515 01:11:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.515 01:11:15 -- common/autotest_common.sh@10 -- # set +x 00:03:23.515 ************************************ 00:03:23.515 END TEST driver 00:03:23.515 ************************************ 00:03:23.515 01:11:15 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:23.515 01:11:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:23.515 01:11:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:23.515 01:11:15 -- common/autotest_common.sh@10 -- # set +x 00:03:23.515 ************************************ 00:03:23.515 START TEST devices 00:03:23.515 ************************************ 00:03:23.515 01:11:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:23.515 * Looking for test storage... 
00:03:23.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:23.515 01:11:15 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:23.515 01:11:15 -- setup/devices.sh@192 -- # setup reset 00:03:23.515 01:11:15 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.515 01:11:15 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.896 01:11:16 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:24.896 01:11:16 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:24.896 01:11:16 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:24.896 01:11:16 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:24.896 01:11:16 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:24.896 01:11:16 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:24.896 01:11:16 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:24.896 01:11:16 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:24.896 01:11:16 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:24.896 01:11:16 -- setup/devices.sh@196 -- # blocks=() 00:03:24.896 01:11:16 -- setup/devices.sh@196 -- # declare -a blocks 00:03:24.896 01:11:16 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:24.896 01:11:16 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:24.896 01:11:16 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:24.896 01:11:16 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:24.896 01:11:16 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:24.896 01:11:16 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:24.896 01:11:16 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:24.896 01:11:16 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:24.896 01:11:16 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:24.896 01:11:16 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:24.896 01:11:16 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:24.896 No valid GPT data, bailing 00:03:24.896 01:11:16 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:24.896 01:11:16 -- scripts/common.sh@393 -- # pt= 00:03:24.896 01:11:16 -- scripts/common.sh@394 -- # return 1 00:03:24.896 01:11:16 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:24.896 01:11:16 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:24.896 01:11:16 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:24.896 01:11:16 -- setup/common.sh@80 -- # echo 1000204886016 00:03:24.896 01:11:16 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:24.896 01:11:16 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:24.896 01:11:16 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:24.896 01:11:16 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:24.896 01:11:16 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:24.896 01:11:16 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:24.896 01:11:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:24.896 01:11:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:24.896 01:11:16 -- common/autotest_common.sh@10 -- # set +x 00:03:24.896 ************************************ 00:03:24.896 START TEST nvme_mount 00:03:24.896 ************************************ 00:03:24.896 01:11:16 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:03:24.896 01:11:16 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:24.896 01:11:16 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:24.896 01:11:16 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.896 01:11:16 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:24.896 01:11:16 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:24.896 01:11:16 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:24.896 01:11:16 -- setup/common.sh@40 -- # local part_no=1 00:03:24.896 01:11:16 -- setup/common.sh@41 -- # local size=1073741824 00:03:24.896 01:11:16 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:24.896 01:11:16 -- setup/common.sh@44 -- # parts=() 00:03:24.896 01:11:16 -- setup/common.sh@44 -- # local parts 00:03:24.896 01:11:16 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:24.896 01:11:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:24.896 01:11:16 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:24.896 01:11:16 -- setup/common.sh@46 -- # (( part++ )) 00:03:24.896 01:11:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:24.896 01:11:16 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:24.896 01:11:16 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:24.896 01:11:16 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:26.278 Creating new GPT entries in memory. 00:03:26.278 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:26.278 other utilities. 00:03:26.278 01:11:17 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:26.278 01:11:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:26.278 01:11:17 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:26.278 01:11:17 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:26.278 01:11:17 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:27.214 Creating new GPT entries in memory. 00:03:27.214 The operation has completed successfully. 
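At this point the nvme_mount test has zapped /dev/nvme0n1 and created a single partition covering sectors 2048-2099199 (~1 GiB); the trace that follows formats it with mkfs.ext4 -qF and mounts it under the test's nvme_mount directory. A minimal stand-alone sketch of that sequence, assuming a dedicated scratch disk at /dev/nvme0n1 and a placeholder mount point of your own choosing (the harness's real mount path is the nvme_mount directory shown in the trace):

# WARNING: destroys all data on $disk. Illustration only, not the harness script itself.
disk=/dev/nvme0n1                 # same device this run exercises
mnt=/mnt/nvme_mount_demo          # placeholder mount point, not taken from this log
sudo sgdisk "$disk" --zap-all                    # wipe any existing partition table
sudo sgdisk "$disk" --new=1:2048:2099199         # one ~1 GiB partition, as in the trace
sudo mkfs.ext4 -qF "${disk}p1"                   # same mkfs flags the trace shows
sudo mkdir -p "$mnt" && sudo mount "${disk}p1" "$mnt"
sudo touch "$mnt/test_nvme"                      # stand-in for the test_nvme file the test verifies
sudo umount "$mnt" && sudo wipefs --all "${disk}p1"   # cleanup, mirroring cleanup_nvme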
00:03:27.214 01:11:18 -- setup/common.sh@57 -- # (( part++ )) 00:03:27.214 01:11:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:27.215 01:11:18 -- setup/common.sh@62 -- # wait 500709 00:03:27.215 01:11:18 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.215 01:11:18 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:27.215 01:11:18 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.215 01:11:18 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:27.215 01:11:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:27.215 01:11:18 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.215 01:11:18 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:27.215 01:11:18 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:27.215 01:11:18 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:27.215 01:11:18 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.215 01:11:18 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:27.215 01:11:18 -- setup/devices.sh@53 -- # local found=0 00:03:27.215 01:11:18 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:27.215 01:11:18 -- setup/devices.sh@56 -- # : 00:03:27.215 01:11:18 -- setup/devices.sh@59 -- # local pci status 00:03:27.215 01:11:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.215 01:11:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:27.215 01:11:18 -- setup/devices.sh@47 -- # setup output config 00:03:27.215 01:11:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.215 01:11:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:28.198 01:11:19 -- setup/devices.sh@63 -- # found=1 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 
01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.198 01:11:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.198 01:11:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:28.198 01:11:19 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:28.198 01:11:19 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.457 01:11:19 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:28.457 01:11:19 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.457 01:11:19 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:28.457 01:11:19 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.457 01:11:19 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.457 01:11:19 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:28.457 01:11:19 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:28.457 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:28.457 01:11:19 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:28.457 01:11:19 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:28.716 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:28.716 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:28.716 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:28.716 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:28.716 01:11:20 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:28.716 01:11:20 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:28.716 01:11:20 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.716 01:11:20 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:28.716 01:11:20 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:28.716 01:11:20 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.716 01:11:20 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.716 01:11:20 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:28.716 01:11:20 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:28.716 01:11:20 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.716 01:11:20 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.716 01:11:20 -- setup/devices.sh@53 -- # local found=0 00:03:28.716 01:11:20 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:28.716 01:11:20 -- setup/devices.sh@56 -- # : 00:03:28.716 01:11:20 -- setup/devices.sh@59 -- # local pci status 00:03:28.716 01:11:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.716 01:11:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:28.716 01:11:20 -- setup/devices.sh@47 -- # setup output config 00:03:28.716 01:11:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.716 01:11:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:29.654 01:11:21 -- setup/devices.sh@63 -- # found=1 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.654 01:11:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.654 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.912 01:11:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:29.912 01:11:21 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:29.912 01:11:21 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.912 01:11:21 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:29.912 01:11:21 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:29.912 01:11:21 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.912 01:11:21 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:29.912 01:11:21 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:29.912 01:11:21 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:29.912 01:11:21 -- setup/devices.sh@50 -- # local mount_point= 00:03:29.912 01:11:21 -- setup/devices.sh@51 -- # local test_file= 00:03:29.912 01:11:21 -- setup/devices.sh@53 -- # local found=0 00:03:29.912 01:11:21 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:29.912 01:11:21 -- setup/devices.sh@59 -- # local pci status 00:03:29.912 01:11:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.912 01:11:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:29.912 01:11:21 -- setup/devices.sh@47 -- # setup output config 00:03:29.912 01:11:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.912 01:11:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:31.291 01:11:22 -- 
setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:31.291 01:11:22 -- setup/devices.sh@63 -- # found=1 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.291 01:11:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:31.291 01:11:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:31.291 01:11:22 -- setup/devices.sh@68 -- # return 0 00:03:31.291 01:11:22 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:31.291 01:11:22 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.291 01:11:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:03:31.291 01:11:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:31.291 01:11:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:31.291 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:31.291 00:03:31.292 real 0m6.237s 00:03:31.292 user 0m1.438s 00:03:31.292 sys 0m2.398s 00:03:31.292 01:11:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.292 01:11:22 -- common/autotest_common.sh@10 -- # set +x 00:03:31.292 ************************************ 00:03:31.292 END TEST nvme_mount 00:03:31.292 ************************************ 00:03:31.292 01:11:22 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:31.292 01:11:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:31.292 01:11:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:31.292 01:11:22 -- common/autotest_common.sh@10 -- # set +x 00:03:31.292 ************************************ 00:03:31.292 START TEST dm_mount 00:03:31.292 ************************************ 00:03:31.292 01:11:22 -- common/autotest_common.sh@1104 -- # dm_mount 00:03:31.292 01:11:22 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:31.292 01:11:22 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:31.292 01:11:22 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:31.292 01:11:22 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:31.292 01:11:22 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:31.292 01:11:22 -- setup/common.sh@40 -- # local part_no=2 00:03:31.292 01:11:22 -- setup/common.sh@41 -- # local size=1073741824 00:03:31.292 01:11:22 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:31.292 01:11:22 -- setup/common.sh@44 -- # parts=() 00:03:31.292 01:11:22 -- setup/common.sh@44 -- # local parts 00:03:31.292 01:11:22 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:31.292 01:11:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.292 01:11:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:31.292 01:11:22 -- setup/common.sh@46 -- # (( part++ )) 00:03:31.292 01:11:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.292 01:11:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:31.292 01:11:22 -- setup/common.sh@46 -- # (( part++ )) 00:03:31.292 01:11:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.292 01:11:22 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:31.292 01:11:22 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:31.292 01:11:22 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:32.228 Creating new GPT entries in memory. 00:03:32.228 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:32.228 other utilities. 00:03:32.228 01:11:23 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:32.228 01:11:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.228 01:11:23 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:32.228 01:11:23 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:32.228 01:11:23 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:33.607 Creating new GPT entries in memory. 00:03:33.607 The operation has completed successfully. 
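The dm_mount test repeats the partitioning step, this time creating two partitions (the second sgdisk --new call follows immediately below) and then layering a device-mapper target named nvme_dm_test on top of them before formatting and mounting it. The trace never prints the dm table itself, so the lines below are only a plausible reconstruction assuming a linear concatenation of the two 2097152-sector partitions, not the harness's verbatim table:

# Hypothetical linear mapping over the two partitions created by this test.
sudo dmsetup create nvme_dm_test <<'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF
ls -l /dev/mapper/nvme_dm_test                 # the trace resolves this link to /dev/dm-0
sudo mkfs.ext4 -qF /dev/mapper/nvme_dm_test    # same mkfs flags the trace shows
# teardown, mirroring the cleanup_dm / cleanup_nvme steps later in the log
sudo dmsetup remove --force nvme_dm_test
sudo wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p2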
00:03:33.607 01:11:24 -- setup/common.sh@57 -- # (( part++ )) 00:03:33.607 01:11:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:33.607 01:11:24 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:33.607 01:11:24 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:33.607 01:11:24 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:34.544 The operation has completed successfully. 00:03:34.544 01:11:25 -- setup/common.sh@57 -- # (( part++ )) 00:03:34.544 01:11:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.544 01:11:25 -- setup/common.sh@62 -- # wait 503170 00:03:34.544 01:11:25 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:34.544 01:11:25 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.544 01:11:25 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:34.544 01:11:25 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:34.544 01:11:25 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:34.544 01:11:25 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.544 01:11:25 -- setup/devices.sh@161 -- # break 00:03:34.544 01:11:25 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.544 01:11:25 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:34.544 01:11:25 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:34.544 01:11:25 -- setup/devices.sh@166 -- # dm=dm-0 00:03:34.544 01:11:25 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:34.544 01:11:25 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:34.544 01:11:25 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.544 01:11:25 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:34.544 01:11:25 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.544 01:11:25 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.544 01:11:25 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:34.544 01:11:26 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.544 01:11:26 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:34.544 01:11:26 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:34.544 01:11:26 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:34.544 01:11:26 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.544 01:11:26 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:34.544 01:11:26 -- setup/devices.sh@53 -- # local found=0 00:03:34.544 01:11:26 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:34.544 01:11:26 -- setup/devices.sh@56 -- # : 00:03:34.544 01:11:26 -- 
setup/devices.sh@59 -- # local pci status 00:03:34.544 01:11:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.544 01:11:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:34.544 01:11:26 -- setup/devices.sh@47 -- # setup output config 00:03:34.544 01:11:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.544 01:11:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:35.477 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.477 01:11:27 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:35.477 01:11:27 -- setup/devices.sh@63 -- # found=1 00:03:35.477 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.477 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.478 01:11:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.478 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.737 01:11:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:35.737 01:11:27 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:35.737 01:11:27 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:35.737 01:11:27 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:35.737 01:11:27 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:35.737 01:11:27 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:35.737 01:11:27 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:35.737 01:11:27 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:35.737 01:11:27 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:35.737 01:11:27 -- setup/devices.sh@50 -- # local mount_point= 00:03:35.737 01:11:27 -- setup/devices.sh@51 -- # local test_file= 00:03:35.737 01:11:27 -- setup/devices.sh@53 -- # local found=0 00:03:35.737 01:11:27 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:35.737 01:11:27 -- setup/devices.sh@59 -- # local pci status 00:03:35.737 01:11:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.737 01:11:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:35.737 01:11:27 -- setup/devices.sh@47 -- # setup output config 00:03:35.737 01:11:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.737 01:11:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:36.672 01:11:28 -- setup/devices.sh@63 -- # found=1 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.672 01:11:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.672 01:11:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.930 01:11:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:36.930 01:11:28 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:36.930 01:11:28 -- setup/devices.sh@68 -- # return 0 00:03:36.930 01:11:28 -- setup/devices.sh@187 -- # cleanup_dm 00:03:36.930 01:11:28 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:36.930 01:11:28 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:36.931 01:11:28 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:36.931 01:11:28 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:36.931 01:11:28 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:36.931 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:36.931 01:11:28 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:36.931 01:11:28 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:36.931 00:03:36.931 real 0m5.721s 00:03:36.931 user 0m0.991s 00:03:36.931 sys 0m1.613s 00:03:36.931 01:11:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.931 01:11:28 -- common/autotest_common.sh@10 -- # set +x 00:03:36.931 ************************************ 00:03:36.931 END TEST dm_mount 00:03:36.931 ************************************ 00:03:36.931 01:11:28 -- setup/devices.sh@1 -- # cleanup 00:03:36.931 01:11:28 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:36.931 01:11:28 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.931 01:11:28 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:36.931 01:11:28 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:36.931 01:11:28 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:36.931 01:11:28 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:37.188 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:37.188 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:37.188 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:37.188 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:37.188 01:11:28 -- setup/devices.sh@12 -- # cleanup_dm 00:03:37.188 01:11:28 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:37.188 01:11:28 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:37.188 01:11:28 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:37.188 01:11:28 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:37.188 01:11:28 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:37.188 01:11:28 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:37.188 00:03:37.188 real 0m13.833s 00:03:37.188 user 0m3.066s 00:03:37.188 sys 0m5.014s 00:03:37.188 01:11:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.188 01:11:28 -- common/autotest_common.sh@10 -- # set +x 00:03:37.188 ************************************ 00:03:37.188 END TEST devices 00:03:37.188 ************************************ 00:03:37.446 00:03:37.447 real 0m43.137s 00:03:37.447 user 0m12.377s 00:03:37.447 sys 0m19.184s 00:03:37.447 01:11:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.447 01:11:28 -- common/autotest_common.sh@10 -- # set +x 00:03:37.447 ************************************ 00:03:37.447 END TEST setup.sh 00:03:37.447 ************************************ 00:03:37.447 01:11:28 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:38.383 Hugepages 00:03:38.383 node hugesize free / total 00:03:38.383 node0 1048576kB 0 / 0 00:03:38.383 node0 2048kB 2048 / 2048 00:03:38.383 node1 1048576kB 0 / 0 00:03:38.383 node1 2048kB 0 / 0 00:03:38.383 00:03:38.383 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:38.383 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:38.383 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:38.383 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:38.383 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:38.383 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:38.383 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:38.383 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:38.383 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:38.383 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:38.383 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:38.383 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:38.383 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:38.383 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:38.383 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:38.383 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:38.383 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:38.383 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:38.383 01:11:30 -- spdk/autotest.sh@141 -- # uname -s 00:03:38.383 01:11:30 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:03:38.383 01:11:30 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:03:38.383 01:11:30 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:39.764 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:39.764 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:39.764 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:39.764 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:39.764 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:39.764 0000:00:04.2 (8086 0e22): 
ioatdma -> vfio-pci 00:03:39.764 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:39.764 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:39.764 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:39.764 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:39.764 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:39.764 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:39.764 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:39.764 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:39.764 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:39.764 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:40.705 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:40.705 01:11:32 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:42.083 01:11:33 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:42.083 01:11:33 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:42.083 01:11:33 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:03:42.083 01:11:33 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:03:42.083 01:11:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:42.083 01:11:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:42.083 01:11:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:42.083 01:11:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:42.083 01:11:33 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:42.083 01:11:33 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:42.083 01:11:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:42.083 01:11:33 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.027 Waiting for block devices as requested 00:03:43.027 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:43.285 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:43.285 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:43.285 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:43.543 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:43.543 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:43.543 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:43.543 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:43.802 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:43.802 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:43.802 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:43.803 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:44.063 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:44.063 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:44.063 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:44.063 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:44.322 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:44.322 01:11:35 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:03:44.322 01:11:35 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:44.322 01:11:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:44.322 01:11:35 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:03:44.322 01:11:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:44.322 01:11:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:44.322 01:11:35 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:44.322 01:11:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:44.322 01:11:35 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:03:44.322 01:11:35 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:03:44.322 01:11:36 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:03:44.322 01:11:36 -- common/autotest_common.sh@1530 -- # grep oacs 00:03:44.322 01:11:36 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:03:44.322 01:11:36 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:03:44.322 01:11:36 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:03:44.322 01:11:36 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:03:44.322 01:11:36 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:03:44.322 01:11:36 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:03:44.322 01:11:36 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:03:44.322 01:11:36 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:03:44.322 01:11:36 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:03:44.322 01:11:36 -- common/autotest_common.sh@1542 -- # continue 00:03:44.322 01:11:36 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:03:44.322 01:11:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:44.322 01:11:36 -- common/autotest_common.sh@10 -- # set +x 00:03:44.322 01:11:36 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:03:44.322 01:11:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:44.322 01:11:36 -- common/autotest_common.sh@10 -- # set +x 00:03:44.322 01:11:36 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:45.730 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:45.730 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:45.730 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:45.730 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:45.730 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:45.730 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:45.730 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:45.730 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:45.730 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:45.730 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:45.730 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:45.730 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:45.730 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:45.730 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:45.730 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:45.730 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:46.669 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:46.669 01:11:38 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:03:46.669 01:11:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:46.669 01:11:38 -- common/autotest_common.sh@10 -- # set +x 00:03:46.669 01:11:38 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:03:46.669 01:11:38 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:46.669 01:11:38 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:46.669 01:11:38 -- common/autotest_common.sh@1562 -- # bdfs=() 00:03:46.669 01:11:38 -- common/autotest_common.sh@1562 -- # local bdfs 00:03:46.669 01:11:38 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:46.669 01:11:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:46.669 
01:11:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:46.669 01:11:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:46.669 01:11:38 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:46.669 01:11:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:46.928 01:11:38 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:46.928 01:11:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:46.928 01:11:38 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:03:46.928 01:11:38 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:46.928 01:11:38 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:03:46.928 01:11:38 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:46.928 01:11:38 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:03:46.928 01:11:38 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:03:46.928 01:11:38 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:03:46.928 01:11:38 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=508471 00:03:46.928 01:11:38 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:46.928 01:11:38 -- common/autotest_common.sh@1583 -- # waitforlisten 508471 00:03:46.928 01:11:38 -- common/autotest_common.sh@819 -- # '[' -z 508471 ']' 00:03:46.928 01:11:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:46.928 01:11:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:46.928 01:11:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:46.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:46.928 01:11:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:46.928 01:11:38 -- common/autotest_common.sh@10 -- # set +x 00:03:46.928 [2024-07-27 01:11:38.531465] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
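The controller discovery the trace above steps through (get_nvme_bdfs, then get_nvme_bdfs_by_id 0x0a54 ahead of the OPAL revert) reduces to asking gen_nvme.sh for the local NVMe configuration and filtering by the PCI device ID read back from sysfs. A sketch of that flow under the same paths; the opal_bdfs name is introduced here only for clarity, the trace reuses bdfs for both arrays:

    # get_nvme_bdfs: gen_nvme.sh emits a bdev JSON config, jq pulls each traddr
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    # get_nvme_bdfs_by_id 0x0a54: keep controllers whose PCI device ID matches
    opal_bdfs=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")
    done
    printf '%s\n' "${opal_bdfs[@]}"    # on this node: 0000:88:00.0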
00:03:46.928 [2024-07-27 01:11:38.531546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid508471 ] 00:03:46.928 EAL: No free 2048 kB hugepages reported on node 1 00:03:46.928 [2024-07-27 01:11:38.587704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.186 [2024-07-27 01:11:38.696945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:47.186 [2024-07-27 01:11:38.697125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.752 01:11:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:47.752 01:11:39 -- common/autotest_common.sh@852 -- # return 0 00:03:47.752 01:11:39 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:47.752 01:11:39 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:47.752 01:11:39 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:51.037 nvme0n1 00:03:51.037 01:11:42 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:51.037 [2024-07-27 01:11:42.768473] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:51.037 [2024-07-27 01:11:42.768521] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:51.037 request: 00:03:51.037 { 00:03:51.037 "nvme_ctrlr_name": "nvme0", 00:03:51.037 "password": "test", 00:03:51.037 "method": "bdev_nvme_opal_revert", 00:03:51.037 "req_id": 1 00:03:51.037 } 00:03:51.037 Got JSON-RPC error response 00:03:51.037 response: 00:03:51.037 { 00:03:51.037 "code": -32603, 00:03:51.037 "message": "Internal error" 00:03:51.037 } 00:03:51.037 01:11:42 -- common/autotest_common.sh@1589 -- # true 00:03:51.037 01:11:42 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:51.037 01:11:42 -- common/autotest_common.sh@1593 -- # killprocess 508471 00:03:51.037 01:11:42 -- common/autotest_common.sh@926 -- # '[' -z 508471 ']' 00:03:51.037 01:11:42 -- common/autotest_common.sh@930 -- # kill -0 508471 00:03:51.037 01:11:42 -- common/autotest_common.sh@931 -- # uname 00:03:51.037 01:11:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:03:51.037 01:11:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 508471 00:03:51.296 01:11:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:03:51.296 01:11:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:03:51.296 01:11:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 508471' 00:03:51.296 killing process with pid 508471 00:03:51.296 01:11:42 -- common/autotest_common.sh@945 -- # kill 508471 00:03:51.296 01:11:42 -- common/autotest_common.sh@950 -- # wait 508471 00:03:53.193 01:11:44 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:03:53.193 01:11:44 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:03:53.193 01:11:44 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:53.193 01:11:44 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:53.193 01:11:44 -- spdk/autotest.sh@173 -- # timing_enter lib 00:03:53.193 01:11:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:53.193 01:11:44 -- common/autotest_common.sh@10 -- # set +x 00:03:53.193 01:11:44 
-- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:53.193 01:11:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:53.193 01:11:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:53.193 01:11:44 -- common/autotest_common.sh@10 -- # set +x 00:03:53.193 ************************************ 00:03:53.193 START TEST env 00:03:53.193 ************************************ 00:03:53.193 01:11:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:53.193 * Looking for test storage... 00:03:53.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:53.193 01:11:44 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:53.193 01:11:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:53.193 01:11:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:53.193 01:11:44 -- common/autotest_common.sh@10 -- # set +x 00:03:53.193 ************************************ 00:03:53.193 START TEST env_memory 00:03:53.193 ************************************ 00:03:53.193 01:11:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:53.193 00:03:53.193 00:03:53.193 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.193 http://cunit.sourceforge.net/ 00:03:53.193 00:03:53.193 00:03:53.193 Suite: memory 00:03:53.193 Test: alloc and free memory map ...[2024-07-27 01:11:44.721614] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:53.193 passed 00:03:53.193 Test: mem map translation ...[2024-07-27 01:11:44.741666] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:53.193 [2024-07-27 01:11:44.741688] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:53.193 [2024-07-27 01:11:44.741744] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:53.193 [2024-07-27 01:11:44.741757] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:53.193 passed 00:03:53.193 Test: mem map registration ...[2024-07-27 01:11:44.782461] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:53.193 [2024-07-27 01:11:44.782481] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:53.193 passed 00:03:53.193 Test: mem map adjacent registrations ...passed 00:03:53.193 00:03:53.193 Run Summary: Type Total Ran Passed Failed Inactive 00:03:53.193 suites 1 1 n/a 0 0 00:03:53.193 tests 4 4 4 0 0 00:03:53.193 asserts 152 152 152 0 n/a 00:03:53.193 00:03:53.193 Elapsed time = 0.141 seconds 00:03:53.193 00:03:53.193 real 0m0.149s 00:03:53.193 user 0m0.143s 00:03:53.193 sys 0m0.006s 00:03:53.193 
01:11:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.193 01:11:44 -- common/autotest_common.sh@10 -- # set +x 00:03:53.193 ************************************ 00:03:53.193 END TEST env_memory 00:03:53.193 ************************************ 00:03:53.193 01:11:44 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:53.193 01:11:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:53.193 01:11:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:53.193 01:11:44 -- common/autotest_common.sh@10 -- # set +x 00:03:53.193 ************************************ 00:03:53.193 START TEST env_vtophys 00:03:53.193 ************************************ 00:03:53.193 01:11:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:53.193 EAL: lib.eal log level changed from notice to debug 00:03:53.193 EAL: Detected lcore 0 as core 0 on socket 0 00:03:53.193 EAL: Detected lcore 1 as core 1 on socket 0 00:03:53.193 EAL: Detected lcore 2 as core 2 on socket 0 00:03:53.193 EAL: Detected lcore 3 as core 3 on socket 0 00:03:53.193 EAL: Detected lcore 4 as core 4 on socket 0 00:03:53.193 EAL: Detected lcore 5 as core 5 on socket 0 00:03:53.193 EAL: Detected lcore 6 as core 8 on socket 0 00:03:53.193 EAL: Detected lcore 7 as core 9 on socket 0 00:03:53.193 EAL: Detected lcore 8 as core 10 on socket 0 00:03:53.193 EAL: Detected lcore 9 as core 11 on socket 0 00:03:53.193 EAL: Detected lcore 10 as core 12 on socket 0 00:03:53.193 EAL: Detected lcore 11 as core 13 on socket 0 00:03:53.193 EAL: Detected lcore 12 as core 0 on socket 1 00:03:53.193 EAL: Detected lcore 13 as core 1 on socket 1 00:03:53.193 EAL: Detected lcore 14 as core 2 on socket 1 00:03:53.193 EAL: Detected lcore 15 as core 3 on socket 1 00:03:53.193 EAL: Detected lcore 16 as core 4 on socket 1 00:03:53.193 EAL: Detected lcore 17 as core 5 on socket 1 00:03:53.193 EAL: Detected lcore 18 as core 8 on socket 1 00:03:53.193 EAL: Detected lcore 19 as core 9 on socket 1 00:03:53.193 EAL: Detected lcore 20 as core 10 on socket 1 00:03:53.193 EAL: Detected lcore 21 as core 11 on socket 1 00:03:53.193 EAL: Detected lcore 22 as core 12 on socket 1 00:03:53.193 EAL: Detected lcore 23 as core 13 on socket 1 00:03:53.193 EAL: Detected lcore 24 as core 0 on socket 0 00:03:53.193 EAL: Detected lcore 25 as core 1 on socket 0 00:03:53.193 EAL: Detected lcore 26 as core 2 on socket 0 00:03:53.193 EAL: Detected lcore 27 as core 3 on socket 0 00:03:53.193 EAL: Detected lcore 28 as core 4 on socket 0 00:03:53.193 EAL: Detected lcore 29 as core 5 on socket 0 00:03:53.193 EAL: Detected lcore 30 as core 8 on socket 0 00:03:53.193 EAL: Detected lcore 31 as core 9 on socket 0 00:03:53.193 EAL: Detected lcore 32 as core 10 on socket 0 00:03:53.193 EAL: Detected lcore 33 as core 11 on socket 0 00:03:53.193 EAL: Detected lcore 34 as core 12 on socket 0 00:03:53.193 EAL: Detected lcore 35 as core 13 on socket 0 00:03:53.193 EAL: Detected lcore 36 as core 0 on socket 1 00:03:53.193 EAL: Detected lcore 37 as core 1 on socket 1 00:03:53.193 EAL: Detected lcore 38 as core 2 on socket 1 00:03:53.193 EAL: Detected lcore 39 as core 3 on socket 1 00:03:53.193 EAL: Detected lcore 40 as core 4 on socket 1 00:03:53.193 EAL: Detected lcore 41 as core 5 on socket 1 00:03:53.193 EAL: Detected lcore 42 as core 8 on socket 1 00:03:53.193 EAL: Detected lcore 43 as core 9 on socket 1 00:03:53.193 EAL: Detected lcore 44 as 
core 10 on socket 1 00:03:53.193 EAL: Detected lcore 45 as core 11 on socket 1 00:03:53.193 EAL: Detected lcore 46 as core 12 on socket 1 00:03:53.193 EAL: Detected lcore 47 as core 13 on socket 1 00:03:53.193 EAL: Maximum logical cores by configuration: 128 00:03:53.193 EAL: Detected CPU lcores: 48 00:03:53.193 EAL: Detected NUMA nodes: 2 00:03:53.193 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:53.193 EAL: Detected shared linkage of DPDK 00:03:53.193 EAL: No shared files mode enabled, IPC will be disabled 00:03:53.193 EAL: Bus pci wants IOVA as 'DC' 00:03:53.193 EAL: Buses did not request a specific IOVA mode. 00:03:53.193 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:53.193 EAL: Selected IOVA mode 'VA' 00:03:53.193 EAL: No free 2048 kB hugepages reported on node 1 00:03:53.193 EAL: Probing VFIO support... 00:03:53.193 EAL: IOMMU type 1 (Type 1) is supported 00:03:53.193 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:53.193 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:53.193 EAL: VFIO support initialized 00:03:53.193 EAL: Ask a virtual area of 0x2e000 bytes 00:03:53.193 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:53.193 EAL: Setting up physically contiguous memory... 00:03:53.193 EAL: Setting maximum number of open files to 524288 00:03:53.193 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:53.193 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:53.193 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:53.193 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.193 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:53.193 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.193 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.193 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:53.193 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:53.193 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.193 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:53.193 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.193 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.193 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:53.193 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:53.193 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.193 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:53.194 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.194 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.194 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:53.194 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:53.194 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.194 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:53.194 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.194 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.194 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:53.194 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:53.194 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:53.194 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.194 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:53.194 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.194 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.194 EAL: Virtual 
area found at 0x201000a00000 (size = 0x400000000) 00:03:53.194 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:53.194 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.194 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:53.194 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.194 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.194 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:53.194 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:53.194 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.194 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:53.194 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.194 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.194 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:53.194 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:53.194 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.194 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:53.194 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.194 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.194 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:53.194 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:53.194 EAL: Hugepages will be freed exactly as allocated. 00:03:53.194 EAL: No shared files mode enabled, IPC is disabled 00:03:53.194 EAL: No shared files mode enabled, IPC is disabled 00:03:53.194 EAL: TSC frequency is ~2700000 KHz 00:03:53.194 EAL: Main lcore 0 is ready (tid=7fc4aa953a00;cpuset=[0]) 00:03:53.194 EAL: Trying to obtain current memory policy. 00:03:53.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.194 EAL: Restoring previous memory policy: 0 00:03:53.194 EAL: request: mp_malloc_sync 00:03:53.194 EAL: No shared files mode enabled, IPC is disabled 00:03:53.194 EAL: Heap on socket 0 was expanded by 2MB 00:03:53.194 EAL: No shared files mode enabled, IPC is disabled 00:03:53.194 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:53.194 EAL: Mem event callback 'spdk:(nil)' registered 00:03:53.194 00:03:53.194 00:03:53.194 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.194 http://cunit.sourceforge.net/ 00:03:53.194 00:03:53.194 00:03:53.194 Suite: components_suite 00:03:53.194 Test: vtophys_malloc_test ...passed 00:03:53.194 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:53.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.194 EAL: Restoring previous memory policy: 4 00:03:53.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.194 EAL: request: mp_malloc_sync 00:03:53.194 EAL: No shared files mode enabled, IPC is disabled 00:03:53.194 EAL: Heap on socket 0 was expanded by 4MB 00:03:53.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.194 EAL: request: mp_malloc_sync 00:03:53.194 EAL: No shared files mode enabled, IPC is disabled 00:03:53.194 EAL: Heap on socket 0 was shrunk by 4MB 00:03:53.194 EAL: Trying to obtain current memory policy. 
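Each "VA reserved for memseg list ..., size 400000000" line above covers one of the four per-socket segment lists announced earlier (n_segs:8192, hugepage_sz:2097152): 8192 segments of 2 MiB is exactly the 0x400000000-byte (16 GiB) reservation. The arithmetic, for reference:

    echo $(( 8192 * 2097152 ))            # 17179869184 bytes
    printf '0x%x\n' $(( 8192 * 2097152 )) # 0x400000000, i.e. 16 GiB per memseg list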
00:03:53.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.194 EAL: Restoring previous memory policy: 4 00:03:53.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.194 EAL: request: mp_malloc_sync 00:03:53.194 EAL: No shared files mode enabled, IPC is disabled 00:03:53.194 EAL: Heap on socket 0 was expanded by 6MB 00:03:53.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.194 EAL: request: mp_malloc_sync 00:03:53.194 EAL: No shared files mode enabled, IPC is disabled 00:03:53.194 EAL: Heap on socket 0 was shrunk by 6MB 00:03:53.194 EAL: Trying to obtain current memory policy. 00:03:53.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.194 EAL: Restoring previous memory policy: 4 00:03:53.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.194 EAL: request: mp_malloc_sync 00:03:53.194 EAL: No shared files mode enabled, IPC is disabled 00:03:53.194 EAL: Heap on socket 0 was expanded by 10MB 00:03:53.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.194 EAL: request: mp_malloc_sync 00:03:53.194 EAL: No shared files mode enabled, IPC is disabled 00:03:53.194 EAL: Heap on socket 0 was shrunk by 10MB 00:03:53.194 EAL: Trying to obtain current memory policy. 00:03:53.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.194 EAL: Restoring previous memory policy: 4 00:03:53.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.194 EAL: request: mp_malloc_sync 00:03:53.194 EAL: No shared files mode enabled, IPC is disabled 00:03:53.194 EAL: Heap on socket 0 was expanded by 18MB 00:03:53.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.452 EAL: request: mp_malloc_sync 00:03:53.452 EAL: No shared files mode enabled, IPC is disabled 00:03:53.452 EAL: Heap on socket 0 was shrunk by 18MB 00:03:53.452 EAL: Trying to obtain current memory policy. 00:03:53.452 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.452 EAL: Restoring previous memory policy: 4 00:03:53.452 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.452 EAL: request: mp_malloc_sync 00:03:53.452 EAL: No shared files mode enabled, IPC is disabled 00:03:53.452 EAL: Heap on socket 0 was expanded by 34MB 00:03:53.452 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.452 EAL: request: mp_malloc_sync 00:03:53.452 EAL: No shared files mode enabled, IPC is disabled 00:03:53.452 EAL: Heap on socket 0 was shrunk by 34MB 00:03:53.452 EAL: Trying to obtain current memory policy. 00:03:53.452 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.452 EAL: Restoring previous memory policy: 4 00:03:53.452 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.452 EAL: request: mp_malloc_sync 00:03:53.452 EAL: No shared files mode enabled, IPC is disabled 00:03:53.452 EAL: Heap on socket 0 was expanded by 66MB 00:03:53.452 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.452 EAL: request: mp_malloc_sync 00:03:53.452 EAL: No shared files mode enabled, IPC is disabled 00:03:53.452 EAL: Heap on socket 0 was shrunk by 66MB 00:03:53.452 EAL: Trying to obtain current memory policy. 
00:03:53.452 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.452 EAL: Restoring previous memory policy: 4 00:03:53.452 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.452 EAL: request: mp_malloc_sync 00:03:53.452 EAL: No shared files mode enabled, IPC is disabled 00:03:53.452 EAL: Heap on socket 0 was expanded by 130MB 00:03:53.452 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.452 EAL: request: mp_malloc_sync 00:03:53.452 EAL: No shared files mode enabled, IPC is disabled 00:03:53.452 EAL: Heap on socket 0 was shrunk by 130MB 00:03:53.452 EAL: Trying to obtain current memory policy. 00:03:53.452 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.452 EAL: Restoring previous memory policy: 4 00:03:53.452 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.452 EAL: request: mp_malloc_sync 00:03:53.452 EAL: No shared files mode enabled, IPC is disabled 00:03:53.452 EAL: Heap on socket 0 was expanded by 258MB 00:03:53.452 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.709 EAL: request: mp_malloc_sync 00:03:53.709 EAL: No shared files mode enabled, IPC is disabled 00:03:53.709 EAL: Heap on socket 0 was shrunk by 258MB 00:03:53.709 EAL: Trying to obtain current memory policy. 00:03:53.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.709 EAL: Restoring previous memory policy: 4 00:03:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.709 EAL: request: mp_malloc_sync 00:03:53.709 EAL: No shared files mode enabled, IPC is disabled 00:03:53.709 EAL: Heap on socket 0 was expanded by 514MB 00:03:53.965 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.965 EAL: request: mp_malloc_sync 00:03:53.965 EAL: No shared files mode enabled, IPC is disabled 00:03:53.965 EAL: Heap on socket 0 was shrunk by 514MB 00:03:53.965 EAL: Trying to obtain current memory policy. 
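The expand/shrink rounds of vtophys_spdk_malloc_test report 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB: each figure is 2^n + 2 MB for n = 1..10, presumably the test's doubling allocation plus one extra 2 MB hugepage of allocator overhead (the exact split is not visible in this log). The reported sequence can be reproduced with:

    for n in $(seq 1 10); do echo "$(( (1 << n) + 2 ))MB"; done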
00:03:53.965 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.222 EAL: Restoring previous memory policy: 4 00:03:54.222 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.222 EAL: request: mp_malloc_sync 00:03:54.223 EAL: No shared files mode enabled, IPC is disabled 00:03:54.223 EAL: Heap on socket 0 was expanded by 1026MB 00:03:54.480 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.739 EAL: request: mp_malloc_sync 00:03:54.739 EAL: No shared files mode enabled, IPC is disabled 00:03:54.739 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:54.739 passed 00:03:54.739 00:03:54.739 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.739 suites 1 1 n/a 0 0 00:03:54.739 tests 2 2 2 0 0 00:03:54.739 asserts 497 497 497 0 n/a 00:03:54.739 00:03:54.739 Elapsed time = 1.378 seconds 00:03:54.739 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.739 EAL: request: mp_malloc_sync 00:03:54.739 EAL: No shared files mode enabled, IPC is disabled 00:03:54.739 EAL: Heap on socket 0 was shrunk by 2MB 00:03:54.739 EAL: No shared files mode enabled, IPC is disabled 00:03:54.739 EAL: No shared files mode enabled, IPC is disabled 00:03:54.739 EAL: No shared files mode enabled, IPC is disabled 00:03:54.739 00:03:54.739 real 0m1.495s 00:03:54.739 user 0m0.864s 00:03:54.739 sys 0m0.599s 00:03:54.739 01:11:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.739 01:11:46 -- common/autotest_common.sh@10 -- # set +x 00:03:54.739 ************************************ 00:03:54.739 END TEST env_vtophys 00:03:54.739 ************************************ 00:03:54.739 01:11:46 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:54.739 01:11:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:54.739 01:11:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:54.739 01:11:46 -- common/autotest_common.sh@10 -- # set +x 00:03:54.739 ************************************ 00:03:54.739 START TEST env_pci 00:03:54.739 ************************************ 00:03:54.739 01:11:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:54.739 00:03:54.739 00:03:54.739 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.739 http://cunit.sourceforge.net/ 00:03:54.739 00:03:54.739 00:03:54.739 Suite: pci 00:03:54.739 Test: pci_hook ...[2024-07-27 01:11:46.394381] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 509509 has claimed it 00:03:54.739 EAL: Cannot find device (10000:00:01.0) 00:03:54.739 EAL: Failed to attach device on primary process 00:03:54.739 passed 00:03:54.739 00:03:54.739 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.739 suites 1 1 n/a 0 0 00:03:54.739 tests 1 1 1 0 0 00:03:54.739 asserts 25 25 25 0 n/a 00:03:54.739 00:03:54.739 Elapsed time = 0.022 seconds 00:03:54.739 00:03:54.739 real 0m0.036s 00:03:54.739 user 0m0.013s 00:03:54.739 sys 0m0.023s 00:03:54.739 01:11:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.739 01:11:46 -- common/autotest_common.sh@10 -- # set +x 00:03:54.739 ************************************ 00:03:54.739 END TEST env_pci 00:03:54.739 ************************************ 00:03:54.739 01:11:46 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:54.739 01:11:46 -- env/env.sh@15 -- # uname 00:03:54.739 01:11:46 -- env/env.sh@15 -- # '[' Linux = 
Linux ']' 00:03:54.739 01:11:46 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:54.739 01:11:46 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:54.739 01:11:46 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:03:54.739 01:11:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:54.739 01:11:46 -- common/autotest_common.sh@10 -- # set +x 00:03:54.739 ************************************ 00:03:54.739 START TEST env_dpdk_post_init 00:03:54.739 ************************************ 00:03:54.739 01:11:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:54.739 EAL: Detected CPU lcores: 48 00:03:54.739 EAL: Detected NUMA nodes: 2 00:03:54.739 EAL: Detected shared linkage of DPDK 00:03:54.739 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:54.739 EAL: Selected IOVA mode 'VA' 00:03:54.739 EAL: No free 2048 kB hugepages reported on node 1 00:03:54.739 EAL: VFIO support initialized 00:03:54.739 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:54.997 EAL: Using IOMMU type 1 (Type 1) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:54.997 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:55.933 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:59.214 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:59.214 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:59.214 Starting DPDK initialization... 00:03:59.214 Starting SPDK post initialization... 00:03:59.214 SPDK NVMe probe 00:03:59.214 Attaching to 0000:88:00.0 00:03:59.214 Attached to 0000:88:00.0 00:03:59.214 Cleaning up... 
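Throughout this log the I/OAT engines and the NVMe controller move back and forth between their kernel drivers (ioatdma, nvme) and vfio-pci as setup.sh binds and resets them around each test. If a run is interrupted, the current binding of any BDF from the tables above can be read directly from sysfs; a small check that does not depend on SPDK at all:

    for bdf in 0000:88:00.0 0000:00:04.0; do
        if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
            echo "$bdf -> $(basename "$(readlink "/sys/bus/pci/devices/$bdf/driver")")"
        else
            echo "$bdf -> (no driver bound)"
        fi
    done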
00:03:59.214 00:03:59.214 real 0m4.395s 00:03:59.214 user 0m3.290s 00:03:59.214 sys 0m0.156s 00:03:59.214 01:11:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.214 01:11:50 -- common/autotest_common.sh@10 -- # set +x 00:03:59.214 ************************************ 00:03:59.214 END TEST env_dpdk_post_init 00:03:59.214 ************************************ 00:03:59.214 01:11:50 -- env/env.sh@26 -- # uname 00:03:59.214 01:11:50 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:59.214 01:11:50 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:59.214 01:11:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.214 01:11:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.214 01:11:50 -- common/autotest_common.sh@10 -- # set +x 00:03:59.214 ************************************ 00:03:59.214 START TEST env_mem_callbacks 00:03:59.214 ************************************ 00:03:59.214 01:11:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:59.214 EAL: Detected CPU lcores: 48 00:03:59.214 EAL: Detected NUMA nodes: 2 00:03:59.214 EAL: Detected shared linkage of DPDK 00:03:59.214 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:59.214 EAL: Selected IOVA mode 'VA' 00:03:59.214 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.214 EAL: VFIO support initialized 00:03:59.214 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:59.214 00:03:59.214 00:03:59.214 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.214 http://cunit.sourceforge.net/ 00:03:59.214 00:03:59.214 00:03:59.214 Suite: memory 00:03:59.214 Test: test ... 
00:03:59.214 register 0x200000200000 2097152 00:03:59.214 malloc 3145728 00:03:59.214 register 0x200000400000 4194304 00:03:59.214 buf 0x200000500000 len 3145728 PASSED 00:03:59.214 malloc 64 00:03:59.214 buf 0x2000004fff40 len 64 PASSED 00:03:59.214 malloc 4194304 00:03:59.214 register 0x200000800000 6291456 00:03:59.214 buf 0x200000a00000 len 4194304 PASSED 00:03:59.214 free 0x200000500000 3145728 00:03:59.214 free 0x2000004fff40 64 00:03:59.214 unregister 0x200000400000 4194304 PASSED 00:03:59.214 free 0x200000a00000 4194304 00:03:59.214 unregister 0x200000800000 6291456 PASSED 00:03:59.214 malloc 8388608 00:03:59.214 register 0x200000400000 10485760 00:03:59.214 buf 0x200000600000 len 8388608 PASSED 00:03:59.214 free 0x200000600000 8388608 00:03:59.214 unregister 0x200000400000 10485760 PASSED 00:03:59.214 passed 00:03:59.214 00:03:59.214 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.214 suites 1 1 n/a 0 0 00:03:59.214 tests 1 1 1 0 0 00:03:59.214 asserts 15 15 15 0 n/a 00:03:59.214 00:03:59.214 Elapsed time = 0.005 seconds 00:03:59.214 00:03:59.214 real 0m0.048s 00:03:59.214 user 0m0.010s 00:03:59.214 sys 0m0.038s 00:03:59.214 01:11:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.214 01:11:50 -- common/autotest_common.sh@10 -- # set +x 00:03:59.214 ************************************ 00:03:59.214 END TEST env_mem_callbacks 00:03:59.214 ************************************ 00:03:59.214 00:03:59.214 real 0m6.308s 00:03:59.215 user 0m4.397s 00:03:59.215 sys 0m0.955s 00:03:59.215 01:11:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.215 01:11:50 -- common/autotest_common.sh@10 -- # set +x 00:03:59.215 ************************************ 00:03:59.215 END TEST env 00:03:59.215 ************************************ 00:03:59.215 01:11:50 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:59.215 01:11:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.215 01:11:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.215 01:11:50 -- common/autotest_common.sh@10 -- # set +x 00:03:59.215 ************************************ 00:03:59.215 START TEST rpc 00:03:59.215 ************************************ 00:03:59.215 01:11:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:59.474 * Looking for test storage... 00:03:59.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:59.474 01:11:51 -- rpc/rpc.sh@65 -- # spdk_pid=510169 00:03:59.474 01:11:51 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:59.474 01:11:51 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:59.474 01:11:51 -- rpc/rpc.sh@67 -- # waitforlisten 510169 00:03:59.474 01:11:51 -- common/autotest_common.sh@819 -- # '[' -z 510169 ']' 00:03:59.474 01:11:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.474 01:11:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:59.474 01:11:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
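The rpc.sh run above launches spdk_tgt with bdev tracepoints enabled and then waits for the default UNIX-domain RPC socket. Once the "Waiting for process to start up and listen on /var/tmp/spdk.sock..." message resolves, any rpc.py call can target that socket; a minimal liveness probe in the spirit of waitforlisten (a sketch, not the helper's actual implementation):

    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "spdk_tgt is accepting RPCs on /var/tmp/spdk.sock"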
00:03:59.474 01:11:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:59.474 01:11:51 -- common/autotest_common.sh@10 -- # set +x 00:03:59.474 [2024-07-27 01:11:51.060641] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:03:59.474 [2024-07-27 01:11:51.060720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510169 ] 00:03:59.474 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.474 [2024-07-27 01:11:51.115873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.474 [2024-07-27 01:11:51.221095] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:59.474 [2024-07-27 01:11:51.221258] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:59.474 [2024-07-27 01:11:51.221275] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 510169' to capture a snapshot of events at runtime. 00:03:59.474 [2024-07-27 01:11:51.221292] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid510169 for offline analysis/debug. 00:03:59.474 [2024-07-27 01:11:51.221322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.409 01:11:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:00.409 01:11:52 -- common/autotest_common.sh@852 -- # return 0 00:04:00.409 01:11:52 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:00.409 01:11:52 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:00.409 01:11:52 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:00.409 01:11:52 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:00.409 01:11:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:00.409 01:11:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:00.409 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.409 ************************************ 00:04:00.409 START TEST rpc_integrity 00:04:00.409 ************************************ 00:04:00.409 01:11:52 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:00.409 01:11:52 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:00.409 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:00.409 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.409 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:00.409 01:11:52 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:00.409 01:11:52 -- rpc/rpc.sh@13 -- # jq length 00:04:00.409 01:11:52 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:00.409 01:11:52 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:00.409 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:00.409 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.409 01:11:52 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:04:00.409 01:11:52 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:00.409 01:11:52 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:00.409 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:00.409 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.409 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:00.409 01:11:52 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:00.409 { 00:04:00.409 "name": "Malloc0", 00:04:00.409 "aliases": [ 00:04:00.409 "2f32aae1-9ee4-4a7a-816b-44a9615e44e2" 00:04:00.409 ], 00:04:00.409 "product_name": "Malloc disk", 00:04:00.409 "block_size": 512, 00:04:00.409 "num_blocks": 16384, 00:04:00.409 "uuid": "2f32aae1-9ee4-4a7a-816b-44a9615e44e2", 00:04:00.409 "assigned_rate_limits": { 00:04:00.409 "rw_ios_per_sec": 0, 00:04:00.409 "rw_mbytes_per_sec": 0, 00:04:00.409 "r_mbytes_per_sec": 0, 00:04:00.409 "w_mbytes_per_sec": 0 00:04:00.409 }, 00:04:00.409 "claimed": false, 00:04:00.409 "zoned": false, 00:04:00.409 "supported_io_types": { 00:04:00.409 "read": true, 00:04:00.409 "write": true, 00:04:00.409 "unmap": true, 00:04:00.409 "write_zeroes": true, 00:04:00.409 "flush": true, 00:04:00.409 "reset": true, 00:04:00.409 "compare": false, 00:04:00.409 "compare_and_write": false, 00:04:00.409 "abort": true, 00:04:00.409 "nvme_admin": false, 00:04:00.409 "nvme_io": false 00:04:00.409 }, 00:04:00.409 "memory_domains": [ 00:04:00.409 { 00:04:00.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.409 "dma_device_type": 2 00:04:00.409 } 00:04:00.409 ], 00:04:00.409 "driver_specific": {} 00:04:00.409 } 00:04:00.409 ]' 00:04:00.409 01:11:52 -- rpc/rpc.sh@17 -- # jq length 00:04:00.409 01:11:52 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:00.409 01:11:52 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:00.409 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:00.409 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.409 [2024-07-27 01:11:52.144234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:00.409 [2024-07-27 01:11:52.144277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:00.409 [2024-07-27 01:11:52.144299] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa6cf70 00:04:00.409 [2024-07-27 01:11:52.144313] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:00.409 [2024-07-27 01:11:52.145833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:00.409 [2024-07-27 01:11:52.145861] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:00.409 Passthru0 00:04:00.409 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:00.409 01:11:52 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:00.409 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:00.409 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.409 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:00.409 01:11:52 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:00.409 { 00:04:00.409 "name": "Malloc0", 00:04:00.409 "aliases": [ 00:04:00.409 "2f32aae1-9ee4-4a7a-816b-44a9615e44e2" 00:04:00.409 ], 00:04:00.409 "product_name": "Malloc disk", 00:04:00.409 "block_size": 512, 00:04:00.409 "num_blocks": 16384, 00:04:00.409 "uuid": "2f32aae1-9ee4-4a7a-816b-44a9615e44e2", 00:04:00.409 "assigned_rate_limits": { 00:04:00.409 "rw_ios_per_sec": 0, 00:04:00.409 "rw_mbytes_per_sec": 0, 00:04:00.409 
"r_mbytes_per_sec": 0, 00:04:00.409 "w_mbytes_per_sec": 0 00:04:00.409 }, 00:04:00.409 "claimed": true, 00:04:00.409 "claim_type": "exclusive_write", 00:04:00.409 "zoned": false, 00:04:00.409 "supported_io_types": { 00:04:00.409 "read": true, 00:04:00.409 "write": true, 00:04:00.409 "unmap": true, 00:04:00.409 "write_zeroes": true, 00:04:00.409 "flush": true, 00:04:00.409 "reset": true, 00:04:00.409 "compare": false, 00:04:00.409 "compare_and_write": false, 00:04:00.409 "abort": true, 00:04:00.409 "nvme_admin": false, 00:04:00.409 "nvme_io": false 00:04:00.409 }, 00:04:00.409 "memory_domains": [ 00:04:00.409 { 00:04:00.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.409 "dma_device_type": 2 00:04:00.409 } 00:04:00.409 ], 00:04:00.409 "driver_specific": {} 00:04:00.409 }, 00:04:00.409 { 00:04:00.409 "name": "Passthru0", 00:04:00.409 "aliases": [ 00:04:00.409 "d82fd043-8607-5158-9681-f09a5506861c" 00:04:00.409 ], 00:04:00.409 "product_name": "passthru", 00:04:00.409 "block_size": 512, 00:04:00.409 "num_blocks": 16384, 00:04:00.409 "uuid": "d82fd043-8607-5158-9681-f09a5506861c", 00:04:00.409 "assigned_rate_limits": { 00:04:00.409 "rw_ios_per_sec": 0, 00:04:00.409 "rw_mbytes_per_sec": 0, 00:04:00.409 "r_mbytes_per_sec": 0, 00:04:00.409 "w_mbytes_per_sec": 0 00:04:00.409 }, 00:04:00.409 "claimed": false, 00:04:00.409 "zoned": false, 00:04:00.409 "supported_io_types": { 00:04:00.409 "read": true, 00:04:00.409 "write": true, 00:04:00.409 "unmap": true, 00:04:00.409 "write_zeroes": true, 00:04:00.409 "flush": true, 00:04:00.409 "reset": true, 00:04:00.409 "compare": false, 00:04:00.409 "compare_and_write": false, 00:04:00.409 "abort": true, 00:04:00.409 "nvme_admin": false, 00:04:00.409 "nvme_io": false 00:04:00.409 }, 00:04:00.409 "memory_domains": [ 00:04:00.409 { 00:04:00.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.409 "dma_device_type": 2 00:04:00.409 } 00:04:00.409 ], 00:04:00.409 "driver_specific": { 00:04:00.409 "passthru": { 00:04:00.409 "name": "Passthru0", 00:04:00.409 "base_bdev_name": "Malloc0" 00:04:00.409 } 00:04:00.409 } 00:04:00.409 } 00:04:00.409 ]' 00:04:00.409 01:11:52 -- rpc/rpc.sh@21 -- # jq length 00:04:00.668 01:11:52 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:00.668 01:11:52 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:00.668 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:00.668 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.668 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:00.668 01:11:52 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:00.668 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:00.668 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.668 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:00.668 01:11:52 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:00.668 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:00.668 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.668 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:00.668 01:11:52 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:00.668 01:11:52 -- rpc/rpc.sh@26 -- # jq length 00:04:00.668 01:11:52 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:00.668 00:04:00.668 real 0m0.226s 00:04:00.668 user 0m0.144s 00:04:00.668 sys 0m0.024s 00:04:00.668 01:11:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.668 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.668 ************************************ 
00:04:00.668 END TEST rpc_integrity 00:04:00.668 ************************************ 00:04:00.668 01:11:52 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:00.668 01:11:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:00.668 01:11:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:00.668 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.668 ************************************ 00:04:00.668 START TEST rpc_plugins 00:04:00.668 ************************************ 00:04:00.668 01:11:52 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:00.668 01:11:52 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:00.668 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:00.668 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.668 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:00.668 01:11:52 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:00.668 01:11:52 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:00.668 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:00.668 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.668 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:00.668 01:11:52 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:00.668 { 00:04:00.668 "name": "Malloc1", 00:04:00.668 "aliases": [ 00:04:00.668 "11b991ed-f524-4866-be74-f645bb4b3583" 00:04:00.668 ], 00:04:00.668 "product_name": "Malloc disk", 00:04:00.668 "block_size": 4096, 00:04:00.668 "num_blocks": 256, 00:04:00.668 "uuid": "11b991ed-f524-4866-be74-f645bb4b3583", 00:04:00.668 "assigned_rate_limits": { 00:04:00.668 "rw_ios_per_sec": 0, 00:04:00.668 "rw_mbytes_per_sec": 0, 00:04:00.668 "r_mbytes_per_sec": 0, 00:04:00.668 "w_mbytes_per_sec": 0 00:04:00.668 }, 00:04:00.668 "claimed": false, 00:04:00.668 "zoned": false, 00:04:00.668 "supported_io_types": { 00:04:00.668 "read": true, 00:04:00.668 "write": true, 00:04:00.668 "unmap": true, 00:04:00.668 "write_zeroes": true, 00:04:00.668 "flush": true, 00:04:00.668 "reset": true, 00:04:00.668 "compare": false, 00:04:00.668 "compare_and_write": false, 00:04:00.668 "abort": true, 00:04:00.668 "nvme_admin": false, 00:04:00.668 "nvme_io": false 00:04:00.669 }, 00:04:00.669 "memory_domains": [ 00:04:00.669 { 00:04:00.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.669 "dma_device_type": 2 00:04:00.669 } 00:04:00.669 ], 00:04:00.669 "driver_specific": {} 00:04:00.669 } 00:04:00.669 ]' 00:04:00.669 01:11:52 -- rpc/rpc.sh@32 -- # jq length 00:04:00.669 01:11:52 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:00.669 01:11:52 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:00.669 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:00.669 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.669 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:00.669 01:11:52 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:00.669 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:00.669 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.669 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:00.669 01:11:52 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:00.669 01:11:52 -- rpc/rpc.sh@36 -- # jq length 00:04:00.669 01:11:52 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:00.669 00:04:00.669 real 0m0.110s 00:04:00.669 user 0m0.076s 00:04:00.669 sys 0m0.007s 00:04:00.669 01:11:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.669 01:11:52 -- 
common/autotest_common.sh@10 -- # set +x 00:04:00.669 ************************************ 00:04:00.669 END TEST rpc_plugins 00:04:00.669 ************************************ 00:04:00.669 01:11:52 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:00.669 01:11:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:00.669 01:11:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:00.669 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.669 ************************************ 00:04:00.669 START TEST rpc_trace_cmd_test 00:04:00.669 ************************************ 00:04:00.669 01:11:52 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:00.669 01:11:52 -- rpc/rpc.sh@40 -- # local info 00:04:00.927 01:11:52 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:00.927 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:00.927 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.927 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:00.927 01:11:52 -- rpc/rpc.sh@42 -- # info='{ 00:04:00.927 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid510169", 00:04:00.927 "tpoint_group_mask": "0x8", 00:04:00.927 "iscsi_conn": { 00:04:00.927 "mask": "0x2", 00:04:00.927 "tpoint_mask": "0x0" 00:04:00.927 }, 00:04:00.927 "scsi": { 00:04:00.927 "mask": "0x4", 00:04:00.927 "tpoint_mask": "0x0" 00:04:00.927 }, 00:04:00.927 "bdev": { 00:04:00.927 "mask": "0x8", 00:04:00.927 "tpoint_mask": "0xffffffffffffffff" 00:04:00.927 }, 00:04:00.927 "nvmf_rdma": { 00:04:00.927 "mask": "0x10", 00:04:00.927 "tpoint_mask": "0x0" 00:04:00.927 }, 00:04:00.927 "nvmf_tcp": { 00:04:00.927 "mask": "0x20", 00:04:00.927 "tpoint_mask": "0x0" 00:04:00.927 }, 00:04:00.927 "ftl": { 00:04:00.927 "mask": "0x40", 00:04:00.927 "tpoint_mask": "0x0" 00:04:00.927 }, 00:04:00.927 "blobfs": { 00:04:00.927 "mask": "0x80", 00:04:00.927 "tpoint_mask": "0x0" 00:04:00.927 }, 00:04:00.927 "dsa": { 00:04:00.927 "mask": "0x200", 00:04:00.927 "tpoint_mask": "0x0" 00:04:00.927 }, 00:04:00.927 "thread": { 00:04:00.927 "mask": "0x400", 00:04:00.927 "tpoint_mask": "0x0" 00:04:00.927 }, 00:04:00.927 "nvme_pcie": { 00:04:00.927 "mask": "0x800", 00:04:00.927 "tpoint_mask": "0x0" 00:04:00.927 }, 00:04:00.927 "iaa": { 00:04:00.927 "mask": "0x1000", 00:04:00.927 "tpoint_mask": "0x0" 00:04:00.927 }, 00:04:00.927 "nvme_tcp": { 00:04:00.927 "mask": "0x2000", 00:04:00.927 "tpoint_mask": "0x0" 00:04:00.927 }, 00:04:00.927 "bdev_nvme": { 00:04:00.927 "mask": "0x4000", 00:04:00.927 "tpoint_mask": "0x0" 00:04:00.927 } 00:04:00.927 }' 00:04:00.927 01:11:52 -- rpc/rpc.sh@43 -- # jq length 00:04:00.927 01:11:52 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:00.927 01:11:52 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:00.927 01:11:52 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:00.927 01:11:52 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:00.927 01:11:52 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:00.927 01:11:52 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:00.927 01:11:52 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:00.927 01:11:52 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:00.927 01:11:52 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:00.927 00:04:00.927 real 0m0.193s 00:04:00.927 user 0m0.173s 00:04:00.927 sys 0m0.013s 00:04:00.927 01:11:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.927 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.927 ************************************ 
00:04:00.927 END TEST rpc_trace_cmd_test 00:04:00.927 ************************************ 00:04:00.927 01:11:52 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:00.927 01:11:52 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:00.927 01:11:52 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:00.927 01:11:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:00.927 01:11:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:00.927 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.927 ************************************ 00:04:00.927 START TEST rpc_daemon_integrity 00:04:00.927 ************************************ 00:04:00.927 01:11:52 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:00.927 01:11:52 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:00.927 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:00.927 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:00.927 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:00.927 01:11:52 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:00.927 01:11:52 -- rpc/rpc.sh@13 -- # jq length 00:04:01.186 01:11:52 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:01.186 01:11:52 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:01.186 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:01.186 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:01.186 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:01.186 01:11:52 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:01.186 01:11:52 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:01.186 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:01.186 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:01.186 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:01.186 01:11:52 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:01.186 { 00:04:01.186 "name": "Malloc2", 00:04:01.186 "aliases": [ 00:04:01.186 "bf6bca93-1cb1-45ba-8bae-c97e84543f63" 00:04:01.186 ], 00:04:01.186 "product_name": "Malloc disk", 00:04:01.186 "block_size": 512, 00:04:01.186 "num_blocks": 16384, 00:04:01.186 "uuid": "bf6bca93-1cb1-45ba-8bae-c97e84543f63", 00:04:01.186 "assigned_rate_limits": { 00:04:01.186 "rw_ios_per_sec": 0, 00:04:01.186 "rw_mbytes_per_sec": 0, 00:04:01.186 "r_mbytes_per_sec": 0, 00:04:01.186 "w_mbytes_per_sec": 0 00:04:01.186 }, 00:04:01.186 "claimed": false, 00:04:01.186 "zoned": false, 00:04:01.186 "supported_io_types": { 00:04:01.186 "read": true, 00:04:01.186 "write": true, 00:04:01.186 "unmap": true, 00:04:01.186 "write_zeroes": true, 00:04:01.186 "flush": true, 00:04:01.186 "reset": true, 00:04:01.186 "compare": false, 00:04:01.186 "compare_and_write": false, 00:04:01.186 "abort": true, 00:04:01.186 "nvme_admin": false, 00:04:01.186 "nvme_io": false 00:04:01.186 }, 00:04:01.186 "memory_domains": [ 00:04:01.186 { 00:04:01.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.186 "dma_device_type": 2 00:04:01.186 } 00:04:01.186 ], 00:04:01.186 "driver_specific": {} 00:04:01.186 } 00:04:01.186 ]' 00:04:01.186 01:11:52 -- rpc/rpc.sh@17 -- # jq length 00:04:01.186 01:11:52 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:01.186 01:11:52 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:01.186 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:01.186 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:01.186 [2024-07-27 01:11:52.754036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:01.186 [2024-07-27 
01:11:52.754117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:01.186 [2024-07-27 01:11:52.754140] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc0c970 00:04:01.186 [2024-07-27 01:11:52.754156] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:01.186 [2024-07-27 01:11:52.755497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:01.186 [2024-07-27 01:11:52.755525] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:01.186 Passthru0 00:04:01.186 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:01.186 01:11:52 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:01.186 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:01.186 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:01.186 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:01.186 01:11:52 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:01.186 { 00:04:01.186 "name": "Malloc2", 00:04:01.186 "aliases": [ 00:04:01.186 "bf6bca93-1cb1-45ba-8bae-c97e84543f63" 00:04:01.186 ], 00:04:01.186 "product_name": "Malloc disk", 00:04:01.186 "block_size": 512, 00:04:01.186 "num_blocks": 16384, 00:04:01.186 "uuid": "bf6bca93-1cb1-45ba-8bae-c97e84543f63", 00:04:01.186 "assigned_rate_limits": { 00:04:01.186 "rw_ios_per_sec": 0, 00:04:01.186 "rw_mbytes_per_sec": 0, 00:04:01.186 "r_mbytes_per_sec": 0, 00:04:01.186 "w_mbytes_per_sec": 0 00:04:01.186 }, 00:04:01.186 "claimed": true, 00:04:01.186 "claim_type": "exclusive_write", 00:04:01.186 "zoned": false, 00:04:01.186 "supported_io_types": { 00:04:01.186 "read": true, 00:04:01.186 "write": true, 00:04:01.186 "unmap": true, 00:04:01.186 "write_zeroes": true, 00:04:01.186 "flush": true, 00:04:01.186 "reset": true, 00:04:01.186 "compare": false, 00:04:01.186 "compare_and_write": false, 00:04:01.186 "abort": true, 00:04:01.186 "nvme_admin": false, 00:04:01.186 "nvme_io": false 00:04:01.186 }, 00:04:01.186 "memory_domains": [ 00:04:01.186 { 00:04:01.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.186 "dma_device_type": 2 00:04:01.186 } 00:04:01.186 ], 00:04:01.186 "driver_specific": {} 00:04:01.186 }, 00:04:01.186 { 00:04:01.186 "name": "Passthru0", 00:04:01.186 "aliases": [ 00:04:01.186 "9a2239ee-67d7-5425-9aff-db14ce36f978" 00:04:01.186 ], 00:04:01.186 "product_name": "passthru", 00:04:01.186 "block_size": 512, 00:04:01.186 "num_blocks": 16384, 00:04:01.186 "uuid": "9a2239ee-67d7-5425-9aff-db14ce36f978", 00:04:01.186 "assigned_rate_limits": { 00:04:01.186 "rw_ios_per_sec": 0, 00:04:01.186 "rw_mbytes_per_sec": 0, 00:04:01.186 "r_mbytes_per_sec": 0, 00:04:01.186 "w_mbytes_per_sec": 0 00:04:01.186 }, 00:04:01.186 "claimed": false, 00:04:01.186 "zoned": false, 00:04:01.186 "supported_io_types": { 00:04:01.186 "read": true, 00:04:01.186 "write": true, 00:04:01.186 "unmap": true, 00:04:01.186 "write_zeroes": true, 00:04:01.186 "flush": true, 00:04:01.186 "reset": true, 00:04:01.186 "compare": false, 00:04:01.186 "compare_and_write": false, 00:04:01.186 "abort": true, 00:04:01.186 "nvme_admin": false, 00:04:01.186 "nvme_io": false 00:04:01.186 }, 00:04:01.186 "memory_domains": [ 00:04:01.186 { 00:04:01.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.186 "dma_device_type": 2 00:04:01.186 } 00:04:01.186 ], 00:04:01.186 "driver_specific": { 00:04:01.186 "passthru": { 00:04:01.186 "name": "Passthru0", 00:04:01.186 "base_bdev_name": "Malloc2" 00:04:01.186 } 00:04:01.186 } 00:04:01.186 } 
00:04:01.186 ]' 00:04:01.186 01:11:52 -- rpc/rpc.sh@21 -- # jq length 00:04:01.186 01:11:52 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:01.186 01:11:52 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:01.186 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:01.186 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:01.186 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:01.186 01:11:52 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:01.186 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:01.186 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:01.186 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:01.186 01:11:52 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:01.186 01:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:01.186 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:01.186 01:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:01.186 01:11:52 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:01.186 01:11:52 -- rpc/rpc.sh@26 -- # jq length 00:04:01.186 01:11:52 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:01.186 00:04:01.186 real 0m0.228s 00:04:01.186 user 0m0.149s 00:04:01.186 sys 0m0.019s 00:04:01.186 01:11:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.186 01:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:01.186 ************************************ 00:04:01.186 END TEST rpc_daemon_integrity 00:04:01.186 ************************************ 00:04:01.186 01:11:52 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:01.186 01:11:52 -- rpc/rpc.sh@84 -- # killprocess 510169 00:04:01.186 01:11:52 -- common/autotest_common.sh@926 -- # '[' -z 510169 ']' 00:04:01.186 01:11:52 -- common/autotest_common.sh@930 -- # kill -0 510169 00:04:01.186 01:11:52 -- common/autotest_common.sh@931 -- # uname 00:04:01.186 01:11:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:01.186 01:11:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 510169 00:04:01.186 01:11:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:01.186 01:11:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:01.186 01:11:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 510169' 00:04:01.186 killing process with pid 510169 00:04:01.186 01:11:52 -- common/autotest_common.sh@945 -- # kill 510169 00:04:01.186 01:11:52 -- common/autotest_common.sh@950 -- # wait 510169 00:04:01.752 00:04:01.752 real 0m2.397s 00:04:01.752 user 0m3.074s 00:04:01.752 sys 0m0.563s 00:04:01.752 01:11:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.752 01:11:53 -- common/autotest_common.sh@10 -- # set +x 00:04:01.752 ************************************ 00:04:01.752 END TEST rpc 00:04:01.752 ************************************ 00:04:01.752 01:11:53 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:01.752 01:11:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:01.752 01:11:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:01.752 01:11:53 -- common/autotest_common.sh@10 -- # set +x 00:04:01.752 ************************************ 00:04:01.752 START TEST rpc_client 00:04:01.752 ************************************ 00:04:01.752 01:11:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:01.752 * 
Looking for test storage... 00:04:01.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:01.752 01:11:53 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:01.752 OK 00:04:01.752 01:11:53 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:01.752 00:04:01.752 real 0m0.067s 00:04:01.752 user 0m0.033s 00:04:01.752 sys 0m0.039s 00:04:01.752 01:11:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.752 01:11:53 -- common/autotest_common.sh@10 -- # set +x 00:04:01.752 ************************************ 00:04:01.752 END TEST rpc_client 00:04:01.753 ************************************ 00:04:01.753 01:11:53 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:01.753 01:11:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:01.753 01:11:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:01.753 01:11:53 -- common/autotest_common.sh@10 -- # set +x 00:04:01.753 ************************************ 00:04:01.753 START TEST json_config 00:04:01.753 ************************************ 00:04:01.753 01:11:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:02.010 01:11:53 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:02.010 01:11:53 -- nvmf/common.sh@7 -- # uname -s 00:04:02.010 01:11:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:02.010 01:11:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:02.010 01:11:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:02.010 01:11:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:02.010 01:11:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:02.010 01:11:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:02.010 01:11:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:02.010 01:11:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:02.010 01:11:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:02.010 01:11:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:02.010 01:11:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:02.010 01:11:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:02.010 01:11:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:02.010 01:11:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:02.010 01:11:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:02.010 01:11:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:02.010 01:11:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:02.010 01:11:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:02.010 01:11:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:02.010 01:11:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.010 01:11:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.010 01:11:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.010 01:11:53 -- paths/export.sh@5 -- # export PATH 00:04:02.011 01:11:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.011 01:11:53 -- nvmf/common.sh@46 -- # : 0 00:04:02.011 01:11:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:02.011 01:11:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:02.011 01:11:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:02.011 01:11:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:02.011 01:11:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:02.011 01:11:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:02.011 01:11:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:02.011 01:11:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:02.011 01:11:53 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:02.011 01:11:53 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:02.011 01:11:53 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:02.011 01:11:53 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:02.011 01:11:53 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:02.011 01:11:53 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:02.011 01:11:53 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:02.011 01:11:53 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:02.011 01:11:53 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:02.011 01:11:53 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:02.011 01:11:53 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:02.011 01:11:53 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:02.011 01:11:53 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:02.011 01:11:53 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:02.011 01:11:53 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:02.011 INFO: JSON configuration test init 00:04:02.011 01:11:53 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:02.011 01:11:53 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:02.011 01:11:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:02.011 01:11:53 -- common/autotest_common.sh@10 -- # set +x 00:04:02.011 01:11:53 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:02.011 01:11:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:02.011 01:11:53 -- common/autotest_common.sh@10 -- # set +x 00:04:02.011 01:11:53 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:02.011 01:11:53 -- json_config/json_config.sh@98 -- # local app=target 00:04:02.011 01:11:53 -- json_config/json_config.sh@99 -- # shift 00:04:02.011 01:11:53 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:02.011 01:11:53 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:02.011 01:11:53 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:02.011 01:11:53 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:02.011 01:11:53 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:02.011 01:11:53 -- json_config/json_config.sh@111 -- # app_pid[$app]=510651 00:04:02.011 01:11:53 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:02.011 01:11:53 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:02.011 Waiting for target to run... 00:04:02.011 01:11:53 -- json_config/json_config.sh@114 -- # waitforlisten 510651 /var/tmp/spdk_tgt.sock 00:04:02.011 01:11:53 -- common/autotest_common.sh@819 -- # '[' -z 510651 ']' 00:04:02.011 01:11:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:02.011 01:11:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:02.011 01:11:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:02.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:02.011 01:11:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:02.011 01:11:53 -- common/autotest_common.sh@10 -- # set +x 00:04:02.011 [2024-07-27 01:11:53.580706] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:02.011 [2024-07-27 01:11:53.580797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510651 ] 00:04:02.011 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.577 [2024-07-27 01:11:54.077304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.577 [2024-07-27 01:11:54.179389] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:02.577 [2024-07-27 01:11:54.179601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.843 01:11:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:02.843 01:11:54 -- common/autotest_common.sh@852 -- # return 0 00:04:02.843 01:11:54 -- json_config/json_config.sh@115 -- # echo '' 00:04:02.843 00:04:02.843 01:11:54 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:02.843 01:11:54 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:02.843 01:11:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:02.843 01:11:54 -- common/autotest_common.sh@10 -- # set +x 00:04:02.843 01:11:54 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:02.843 01:11:54 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:02.843 01:11:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:02.843 01:11:54 -- common/autotest_common.sh@10 -- # set +x 00:04:02.843 01:11:54 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:02.843 01:11:54 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:02.843 01:11:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:06.174 01:11:57 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:06.174 01:11:57 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:06.174 01:11:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:06.174 01:11:57 -- common/autotest_common.sh@10 -- # set +x 00:04:06.174 01:11:57 -- json_config/json_config.sh@48 -- # local ret=0 00:04:06.174 01:11:57 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:06.174 01:11:57 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:06.174 01:11:57 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:06.174 01:11:57 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:06.174 01:11:57 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:06.432 01:11:57 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:06.432 01:11:57 -- json_config/json_config.sh@51 -- # local get_types 00:04:06.432 01:11:57 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:06.432 01:11:57 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:06.432 01:11:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:06.432 01:11:57 -- common/autotest_common.sh@10 -- # set +x 00:04:06.432 01:11:57 -- json_config/json_config.sh@58 -- # return 0 00:04:06.432 01:11:57 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:06.432 01:11:57 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:06.432 01:11:57 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:06.432 01:11:57 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:06.432 01:11:57 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:06.432 01:11:57 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:06.432 01:11:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:06.432 01:11:57 -- common/autotest_common.sh@10 -- # set +x 00:04:06.432 01:11:57 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:06.432 01:11:57 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:06.432 01:11:57 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:06.432 01:11:57 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:06.432 01:11:57 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:06.690 MallocForNvmf0 00:04:06.690 01:11:58 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:06.690 01:11:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:06.948 MallocForNvmf1 00:04:06.948 01:11:58 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:06.948 01:11:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:06.948 [2024-07-27 01:11:58.681226] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:06.948 01:11:58 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:06.949 01:11:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:07.206 01:11:58 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:07.206 01:11:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:07.464 01:11:59 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:07.464 01:11:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:07.722 01:11:59 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:07.722 01:11:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:07.980 [2024-07-27 01:11:59.632435] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
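For context, the json_config run above builds its NVMe-oF/TCP target entirely through RPC calls after booting spdk_tgt with --wait-for-rpc and loading the base config. A minimal hand-run sketch of the same RPC sequence follows; the commands and arguments are copied from the log, while the rpc() wrapper and the assumption of an already-running target on that socket are illustrative.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk      # checkout used by this run
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }

    rpc bdev_malloc_create 8 512 --name MallocForNvmf0          # backing bdevs (size in MB, block size in bytes)
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc nvmf_create_transport -t tcp -u 8192 -c 0               # TCP transport, flags as used by the test
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420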
00:04:07.980 01:11:59 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:07.980 01:11:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:07.980 01:11:59 -- common/autotest_common.sh@10 -- # set +x 00:04:07.980 01:11:59 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:07.980 01:11:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:07.980 01:11:59 -- common/autotest_common.sh@10 -- # set +x 00:04:07.980 01:11:59 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:07.980 01:11:59 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:07.980 01:11:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:08.238 MallocBdevForConfigChangeCheck 00:04:08.238 01:11:59 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:08.238 01:11:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:08.238 01:11:59 -- common/autotest_common.sh@10 -- # set +x 00:04:08.238 01:11:59 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:08.238 01:11:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.804 01:12:00 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:08.804 INFO: shutting down applications... 00:04:08.804 01:12:00 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:08.804 01:12:00 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:08.804 01:12:00 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:08.804 01:12:00 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:10.175 Calling clear_iscsi_subsystem 00:04:10.175 Calling clear_nvmf_subsystem 00:04:10.175 Calling clear_nbd_subsystem 00:04:10.175 Calling clear_ublk_subsystem 00:04:10.175 Calling clear_vhost_blk_subsystem 00:04:10.175 Calling clear_vhost_scsi_subsystem 00:04:10.175 Calling clear_scheduler_subsystem 00:04:10.175 Calling clear_bdev_subsystem 00:04:10.175 Calling clear_accel_subsystem 00:04:10.175 Calling clear_vmd_subsystem 00:04:10.175 Calling clear_sock_subsystem 00:04:10.175 Calling clear_iobuf_subsystem 00:04:10.433 01:12:01 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:10.433 01:12:01 -- json_config/json_config.sh@396 -- # count=100 00:04:10.433 01:12:01 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:10.433 01:12:01 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:10.433 01:12:01 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:10.433 01:12:01 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:10.691 01:12:02 -- json_config/json_config.sh@398 -- # break 00:04:10.691 01:12:02 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:10.691 01:12:02 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:04:10.691 01:12:02 -- json_config/json_config.sh@120 -- # local app=target 00:04:10.691 01:12:02 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:10.691 01:12:02 -- json_config/json_config.sh@124 -- # [[ -n 510651 ]] 00:04:10.691 01:12:02 -- json_config/json_config.sh@127 -- # kill -SIGINT 510651 00:04:10.691 01:12:02 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:10.691 01:12:02 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:10.691 01:12:02 -- json_config/json_config.sh@130 -- # kill -0 510651 00:04:10.691 01:12:02 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:11.260 01:12:02 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:11.260 01:12:02 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:11.260 01:12:02 -- json_config/json_config.sh@130 -- # kill -0 510651 00:04:11.260 01:12:02 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:11.260 01:12:02 -- json_config/json_config.sh@132 -- # break 00:04:11.260 01:12:02 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:11.260 01:12:02 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:11.260 SPDK target shutdown done 00:04:11.260 01:12:02 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:11.260 INFO: relaunching applications... 00:04:11.260 01:12:02 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.260 01:12:02 -- json_config/json_config.sh@98 -- # local app=target 00:04:11.260 01:12:02 -- json_config/json_config.sh@99 -- # shift 00:04:11.260 01:12:02 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:11.260 01:12:02 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:11.260 01:12:02 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:11.260 01:12:02 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:11.260 01:12:02 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:11.260 01:12:02 -- json_config/json_config.sh@111 -- # app_pid[$app]=511989 00:04:11.260 01:12:02 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.260 01:12:02 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:11.260 Waiting for target to run... 00:04:11.260 01:12:02 -- json_config/json_config.sh@114 -- # waitforlisten 511989 /var/tmp/spdk_tgt.sock 00:04:11.260 01:12:02 -- common/autotest_common.sh@819 -- # '[' -z 511989 ']' 00:04:11.260 01:12:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:11.260 01:12:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:11.260 01:12:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:11.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:11.260 01:12:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:11.260 01:12:02 -- common/autotest_common.sh@10 -- # set +x 00:04:11.260 [2024-07-27 01:12:02.842741] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
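The "SPDK target shutdown done" / "relaunching applications..." step above is driven by json_config_test_shutdown_app: it sends SIGINT to the target and polls the pid until it exits. A stripped-down sketch of that wait loop, with tgt_pid standing in for the pid printed in the log (510651 here):

    kill -SIGINT "$tgt_pid"                      # ask spdk_tgt to shut down cleanly
    for (( i = 0; i < 30; i++ )); do             # same 30 x 0.5 s budget the harness uses
        kill -0 "$tgt_pid" 2> /dev/null || break # still alive? keep waiting
        sleep 0.5
    done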
00:04:11.260 [2024-07-27 01:12:02.842829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511989 ] 00:04:11.260 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.826 [2024-07-27 01:12:03.343307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.826 [2024-07-27 01:12:03.445164] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:11.826 [2024-07-27 01:12:03.445380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.110 [2024-07-27 01:12:06.481976] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.110 [2024-07-27 01:12:06.514453] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:15.110 01:12:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:15.110 01:12:06 -- common/autotest_common.sh@852 -- # return 0 00:04:15.110 01:12:06 -- json_config/json_config.sh@115 -- # echo '' 00:04:15.110 00:04:15.110 01:12:06 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:15.110 01:12:06 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:15.110 INFO: Checking if target configuration is the same... 00:04:15.110 01:12:06 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:15.110 01:12:06 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:15.110 01:12:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:15.110 + '[' 2 -ne 2 ']' 00:04:15.110 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:15.110 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:15.110 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:15.110 +++ basename /dev/fd/62 00:04:15.110 ++ mktemp /tmp/62.XXX 00:04:15.110 + tmp_file_1=/tmp/62.RNK 00:04:15.110 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:15.110 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:15.110 + tmp_file_2=/tmp/spdk_tgt_config.json.DyD 00:04:15.110 + ret=0 00:04:15.110 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:15.368 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:15.626 + diff -u /tmp/62.RNK /tmp/spdk_tgt_config.json.DyD 00:04:15.626 + echo 'INFO: JSON config files are the same' 00:04:15.626 INFO: JSON config files are the same 00:04:15.626 + rm /tmp/62.RNK /tmp/spdk_tgt_config.json.DyD 00:04:15.626 + exit 0 00:04:15.626 01:12:07 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:15.626 01:12:07 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:15.626 INFO: changing configuration and checking if this can be detected... 
00:04:15.626 01:12:07 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:15.626 01:12:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:15.884 01:12:07 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:15.884 01:12:07 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:15.884 01:12:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:15.884 + '[' 2 -ne 2 ']' 00:04:15.884 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:15.884 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:15.884 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:15.884 +++ basename /dev/fd/62 00:04:15.884 ++ mktemp /tmp/62.XXX 00:04:15.884 + tmp_file_1=/tmp/62.1Kt 00:04:15.884 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:15.884 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:15.884 + tmp_file_2=/tmp/spdk_tgt_config.json.C20 00:04:15.884 + ret=0 00:04:15.884 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:16.142 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:16.142 + diff -u /tmp/62.1Kt /tmp/spdk_tgt_config.json.C20 00:04:16.142 + ret=1 00:04:16.142 + echo '=== Start of file: /tmp/62.1Kt ===' 00:04:16.142 + cat /tmp/62.1Kt 00:04:16.142 + echo '=== End of file: /tmp/62.1Kt ===' 00:04:16.142 + echo '' 00:04:16.142 + echo '=== Start of file: /tmp/spdk_tgt_config.json.C20 ===' 00:04:16.142 + cat /tmp/spdk_tgt_config.json.C20 00:04:16.142 + echo '=== End of file: /tmp/spdk_tgt_config.json.C20 ===' 00:04:16.142 + echo '' 00:04:16.142 + rm /tmp/62.1Kt /tmp/spdk_tgt_config.json.C20 00:04:16.142 + exit 1 00:04:16.142 01:12:07 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:16.142 INFO: configuration change detected. 
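The "configuration change detected" result above comes from diffing two save_config snapshots after deleting MallocBdevForConfigChangeCheck; both snapshots are passed through the sort filter so JSON object ordering cannot mask or fake a difference. An equivalent manual sketch is below; the temp file names are illustrative and the redirections are assumed, since the log's xtrace does not show them.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }
    sort_cfg() { "$SPDK/test/json_config/config_filter.py" -method sort; }

    rpc save_config | sort_cfg > /tmp/before.json              # snapshot the live config
    rpc bdev_malloc_delete MallocBdevForConfigChangeCheck      # mutate the running target
    rpc save_config | sort_cfg > /tmp/after.json               # snapshot again
    diff -u /tmp/before.json /tmp/after.json || echo 'configuration change detected'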
00:04:16.142 01:12:07 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:16.142 01:12:07 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:16.142 01:12:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:16.142 01:12:07 -- common/autotest_common.sh@10 -- # set +x 00:04:16.142 01:12:07 -- json_config/json_config.sh@360 -- # local ret=0 00:04:16.142 01:12:07 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:16.142 01:12:07 -- json_config/json_config.sh@370 -- # [[ -n 511989 ]] 00:04:16.142 01:12:07 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:16.142 01:12:07 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:16.142 01:12:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:16.142 01:12:07 -- common/autotest_common.sh@10 -- # set +x 00:04:16.142 01:12:07 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:16.142 01:12:07 -- json_config/json_config.sh@246 -- # uname -s 00:04:16.142 01:12:07 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:16.142 01:12:07 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:16.142 01:12:07 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:16.142 01:12:07 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:16.142 01:12:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:16.142 01:12:07 -- common/autotest_common.sh@10 -- # set +x 00:04:16.142 01:12:07 -- json_config/json_config.sh@376 -- # killprocess 511989 00:04:16.142 01:12:07 -- common/autotest_common.sh@926 -- # '[' -z 511989 ']' 00:04:16.142 01:12:07 -- common/autotest_common.sh@930 -- # kill -0 511989 00:04:16.142 01:12:07 -- common/autotest_common.sh@931 -- # uname 00:04:16.142 01:12:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:16.142 01:12:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 511989 00:04:16.142 01:12:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:16.142 01:12:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:16.142 01:12:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 511989' 00:04:16.142 killing process with pid 511989 00:04:16.142 01:12:07 -- common/autotest_common.sh@945 -- # kill 511989 00:04:16.142 01:12:07 -- common/autotest_common.sh@950 -- # wait 511989 00:04:18.041 01:12:09 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.041 01:12:09 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:18.041 01:12:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:18.041 01:12:09 -- common/autotest_common.sh@10 -- # set +x 00:04:18.041 01:12:09 -- json_config/json_config.sh@381 -- # return 0 00:04:18.041 01:12:09 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:18.041 INFO: Success 00:04:18.041 00:04:18.041 real 0m16.024s 00:04:18.041 user 0m18.219s 00:04:18.041 sys 0m2.223s 00:04:18.041 01:12:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.041 01:12:09 -- common/autotest_common.sh@10 -- # set +x 00:04:18.041 ************************************ 00:04:18.041 END TEST json_config 00:04:18.041 ************************************ 00:04:18.041 01:12:09 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:18.041 01:12:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:18.041 01:12:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.041 01:12:09 -- common/autotest_common.sh@10 -- # set +x 00:04:18.041 ************************************ 00:04:18.041 START TEST json_config_extra_key 00:04:18.041 ************************************ 00:04:18.041 01:12:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:18.041 01:12:09 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:18.041 01:12:09 -- nvmf/common.sh@7 -- # uname -s 00:04:18.041 01:12:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:18.041 01:12:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:18.041 01:12:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:18.041 01:12:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:18.041 01:12:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:18.041 01:12:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:18.041 01:12:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:18.041 01:12:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:18.041 01:12:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:18.041 01:12:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:18.041 01:12:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:18.041 01:12:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:18.041 01:12:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:18.041 01:12:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:18.041 01:12:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:18.041 01:12:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:18.041 01:12:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:18.041 01:12:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:18.041 01:12:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:18.041 01:12:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.041 01:12:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.041 01:12:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.041 01:12:09 -- paths/export.sh@5 -- # export PATH 00:04:18.041 01:12:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.041 01:12:09 -- nvmf/common.sh@46 -- # : 0 00:04:18.041 01:12:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:18.041 01:12:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:18.041 01:12:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:18.041 01:12:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:18.041 01:12:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:18.041 01:12:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:18.041 01:12:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:18.041 01:12:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:18.041 01:12:09 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:18.041 01:12:09 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:18.041 01:12:09 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:18.041 01:12:09 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:18.041 01:12:09 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:18.041 01:12:09 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:18.041 01:12:09 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:18.041 01:12:09 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:18.041 01:12:09 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:18.042 01:12:09 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:18.042 INFO: launching applications... 
00:04:18.042 01:12:09 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:18.042 01:12:09 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:18.042 01:12:09 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:18.042 01:12:09 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:18.042 01:12:09 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:18.042 01:12:09 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=513450 00:04:18.042 01:12:09 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:18.042 01:12:09 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:18.042 Waiting for target to run... 00:04:18.042 01:12:09 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 513450 /var/tmp/spdk_tgt.sock 00:04:18.042 01:12:09 -- common/autotest_common.sh@819 -- # '[' -z 513450 ']' 00:04:18.042 01:12:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:18.042 01:12:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:18.042 01:12:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:18.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:18.042 01:12:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:18.042 01:12:09 -- common/autotest_common.sh@10 -- # set +x 00:04:18.042 [2024-07-27 01:12:09.628346] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:18.042 [2024-07-27 01:12:09.628476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513450 ] 00:04:18.042 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.608 [2024-07-27 01:12:10.140528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.608 [2024-07-27 01:12:10.245746] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:18.608 [2024-07-27 01:12:10.245946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.867 01:12:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:18.867 01:12:10 -- common/autotest_common.sh@852 -- # return 0 00:04:18.867 01:12:10 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:18.867 00:04:18.867 01:12:10 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:18.867 INFO: shutting down applications... 
00:04:18.867 01:12:10 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:18.867 01:12:10 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:18.867 01:12:10 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:18.867 01:12:10 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 513450 ]] 00:04:18.867 01:12:10 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 513450 00:04:18.867 01:12:10 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:18.867 01:12:10 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:18.867 01:12:10 -- json_config/json_config_extra_key.sh@50 -- # kill -0 513450 00:04:18.867 01:12:10 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:19.435 01:12:11 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:19.435 01:12:11 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:19.435 01:12:11 -- json_config/json_config_extra_key.sh@50 -- # kill -0 513450 00:04:19.435 01:12:11 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:20.003 01:12:11 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:20.003 01:12:11 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:20.003 01:12:11 -- json_config/json_config_extra_key.sh@50 -- # kill -0 513450 00:04:20.003 01:12:11 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:20.003 01:12:11 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:20.003 01:12:11 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:20.003 01:12:11 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:20.003 SPDK target shutdown done 00:04:20.003 01:12:11 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:20.003 Success 00:04:20.003 00:04:20.003 real 0m2.050s 00:04:20.003 user 0m1.400s 00:04:20.003 sys 0m0.612s 00:04:20.003 01:12:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.003 01:12:11 -- common/autotest_common.sh@10 -- # set +x 00:04:20.003 ************************************ 00:04:20.003 END TEST json_config_extra_key 00:04:20.003 ************************************ 00:04:20.003 01:12:11 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:20.003 01:12:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:20.003 01:12:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.003 01:12:11 -- common/autotest_common.sh@10 -- # set +x 00:04:20.003 ************************************ 00:04:20.003 START TEST alias_rpc 00:04:20.003 ************************************ 00:04:20.003 01:12:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:20.003 * Looking for test storage... 
00:04:20.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:20.003 01:12:11 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:20.003 01:12:11 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=513757 00:04:20.003 01:12:11 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.003 01:12:11 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 513757 00:04:20.003 01:12:11 -- common/autotest_common.sh@819 -- # '[' -z 513757 ']' 00:04:20.003 01:12:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.003 01:12:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:20.003 01:12:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.003 01:12:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:20.003 01:12:11 -- common/autotest_common.sh@10 -- # set +x 00:04:20.003 [2024-07-27 01:12:11.695325] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:20.003 [2024-07-27 01:12:11.695445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513757 ] 00:04:20.003 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.003 [2024-07-27 01:12:11.754647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.262 [2024-07-27 01:12:11.871246] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:20.262 [2024-07-27 01:12:11.871402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.232 01:12:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:21.232 01:12:12 -- common/autotest_common.sh@852 -- # return 0 00:04:21.232 01:12:12 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:21.232 01:12:12 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 513757 00:04:21.232 01:12:12 -- common/autotest_common.sh@926 -- # '[' -z 513757 ']' 00:04:21.232 01:12:12 -- common/autotest_common.sh@930 -- # kill -0 513757 00:04:21.232 01:12:12 -- common/autotest_common.sh@931 -- # uname 00:04:21.232 01:12:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:21.232 01:12:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 513757 00:04:21.232 01:12:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:21.232 01:12:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:21.232 01:12:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 513757' 00:04:21.232 killing process with pid 513757 00:04:21.232 01:12:12 -- common/autotest_common.sh@945 -- # kill 513757 00:04:21.232 01:12:12 -- common/autotest_common.sh@950 -- # wait 513757 00:04:21.799 00:04:21.799 real 0m1.795s 00:04:21.799 user 0m2.094s 00:04:21.799 sys 0m0.440s 00:04:21.799 01:12:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.799 01:12:13 -- common/autotest_common.sh@10 -- # set +x 00:04:21.799 ************************************ 00:04:21.799 END TEST alias_rpc 00:04:21.799 ************************************ 00:04:21.799 01:12:13 -- 
spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:21.799 01:12:13 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:21.799 01:12:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.799 01:12:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:21.799 01:12:13 -- common/autotest_common.sh@10 -- # set +x 00:04:21.799 ************************************ 00:04:21.799 START TEST spdkcli_tcp 00:04:21.799 ************************************ 00:04:21.799 01:12:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:21.799 * Looking for test storage... 00:04:21.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:21.799 01:12:13 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:21.799 01:12:13 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:21.799 01:12:13 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:21.799 01:12:13 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:21.799 01:12:13 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:21.799 01:12:13 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:21.799 01:12:13 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:21.799 01:12:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:21.799 01:12:13 -- common/autotest_common.sh@10 -- # set +x 00:04:21.799 01:12:13 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=514075 00:04:21.799 01:12:13 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:21.799 01:12:13 -- spdkcli/tcp.sh@27 -- # waitforlisten 514075 00:04:21.799 01:12:13 -- common/autotest_common.sh@819 -- # '[' -z 514075 ']' 00:04:21.799 01:12:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.799 01:12:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:21.799 01:12:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.799 01:12:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:21.799 01:12:13 -- common/autotest_common.sh@10 -- # set +x 00:04:21.800 [2024-07-27 01:12:13.526638] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:21.800 [2024-07-27 01:12:13.526721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514075 ] 00:04:21.800 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.056 [2024-07-27 01:12:13.583646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:22.056 [2024-07-27 01:12:13.687611] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:22.056 [2024-07-27 01:12:13.687829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.056 [2024-07-27 01:12:13.687834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.991 01:12:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:22.991 01:12:14 -- common/autotest_common.sh@852 -- # return 0 00:04:22.991 01:12:14 -- spdkcli/tcp.sh@31 -- # socat_pid=514216 00:04:22.991 01:12:14 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:22.991 01:12:14 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:22.991 [ 00:04:22.991 "bdev_malloc_delete", 00:04:22.991 "bdev_malloc_create", 00:04:22.991 "bdev_null_resize", 00:04:22.992 "bdev_null_delete", 00:04:22.992 "bdev_null_create", 00:04:22.992 "bdev_nvme_cuse_unregister", 00:04:22.992 "bdev_nvme_cuse_register", 00:04:22.992 "bdev_opal_new_user", 00:04:22.992 "bdev_opal_set_lock_state", 00:04:22.992 "bdev_opal_delete", 00:04:22.992 "bdev_opal_get_info", 00:04:22.992 "bdev_opal_create", 00:04:22.992 "bdev_nvme_opal_revert", 00:04:22.992 "bdev_nvme_opal_init", 00:04:22.992 "bdev_nvme_send_cmd", 00:04:22.992 "bdev_nvme_get_path_iostat", 00:04:22.992 "bdev_nvme_get_mdns_discovery_info", 00:04:22.992 "bdev_nvme_stop_mdns_discovery", 00:04:22.992 "bdev_nvme_start_mdns_discovery", 00:04:22.992 "bdev_nvme_set_multipath_policy", 00:04:22.992 "bdev_nvme_set_preferred_path", 00:04:22.992 "bdev_nvme_get_io_paths", 00:04:22.992 "bdev_nvme_remove_error_injection", 00:04:22.992 "bdev_nvme_add_error_injection", 00:04:22.992 "bdev_nvme_get_discovery_info", 00:04:22.992 "bdev_nvme_stop_discovery", 00:04:22.992 "bdev_nvme_start_discovery", 00:04:22.992 "bdev_nvme_get_controller_health_info", 00:04:22.992 "bdev_nvme_disable_controller", 00:04:22.992 "bdev_nvme_enable_controller", 00:04:22.992 "bdev_nvme_reset_controller", 00:04:22.992 "bdev_nvme_get_transport_statistics", 00:04:22.992 "bdev_nvme_apply_firmware", 00:04:22.992 "bdev_nvme_detach_controller", 00:04:22.992 "bdev_nvme_get_controllers", 00:04:22.992 "bdev_nvme_attach_controller", 00:04:22.992 "bdev_nvme_set_hotplug", 00:04:22.992 "bdev_nvme_set_options", 00:04:22.992 "bdev_passthru_delete", 00:04:22.992 "bdev_passthru_create", 00:04:22.992 "bdev_lvol_grow_lvstore", 00:04:22.992 "bdev_lvol_get_lvols", 00:04:22.992 "bdev_lvol_get_lvstores", 00:04:22.992 "bdev_lvol_delete", 00:04:22.992 "bdev_lvol_set_read_only", 00:04:22.992 "bdev_lvol_resize", 00:04:22.992 "bdev_lvol_decouple_parent", 00:04:22.992 "bdev_lvol_inflate", 00:04:22.992 "bdev_lvol_rename", 00:04:22.992 "bdev_lvol_clone_bdev", 00:04:22.992 "bdev_lvol_clone", 00:04:22.992 "bdev_lvol_snapshot", 00:04:22.992 "bdev_lvol_create", 00:04:22.992 "bdev_lvol_delete_lvstore", 00:04:22.992 "bdev_lvol_rename_lvstore", 00:04:22.992 "bdev_lvol_create_lvstore", 00:04:22.992 "bdev_raid_set_options", 00:04:22.992 
"bdev_raid_remove_base_bdev", 00:04:22.992 "bdev_raid_add_base_bdev", 00:04:22.992 "bdev_raid_delete", 00:04:22.992 "bdev_raid_create", 00:04:22.992 "bdev_raid_get_bdevs", 00:04:22.992 "bdev_error_inject_error", 00:04:22.992 "bdev_error_delete", 00:04:22.992 "bdev_error_create", 00:04:22.992 "bdev_split_delete", 00:04:22.992 "bdev_split_create", 00:04:22.992 "bdev_delay_delete", 00:04:22.992 "bdev_delay_create", 00:04:22.992 "bdev_delay_update_latency", 00:04:22.992 "bdev_zone_block_delete", 00:04:22.992 "bdev_zone_block_create", 00:04:22.992 "blobfs_create", 00:04:22.992 "blobfs_detect", 00:04:22.992 "blobfs_set_cache_size", 00:04:22.992 "bdev_aio_delete", 00:04:22.992 "bdev_aio_rescan", 00:04:22.992 "bdev_aio_create", 00:04:22.992 "bdev_ftl_set_property", 00:04:22.992 "bdev_ftl_get_properties", 00:04:22.992 "bdev_ftl_get_stats", 00:04:22.992 "bdev_ftl_unmap", 00:04:22.992 "bdev_ftl_unload", 00:04:22.992 "bdev_ftl_delete", 00:04:22.992 "bdev_ftl_load", 00:04:22.992 "bdev_ftl_create", 00:04:22.992 "bdev_virtio_attach_controller", 00:04:22.992 "bdev_virtio_scsi_get_devices", 00:04:22.992 "bdev_virtio_detach_controller", 00:04:22.992 "bdev_virtio_blk_set_hotplug", 00:04:22.992 "bdev_iscsi_delete", 00:04:22.992 "bdev_iscsi_create", 00:04:22.992 "bdev_iscsi_set_options", 00:04:22.992 "accel_error_inject_error", 00:04:22.992 "ioat_scan_accel_module", 00:04:22.992 "dsa_scan_accel_module", 00:04:22.992 "iaa_scan_accel_module", 00:04:22.992 "iscsi_set_options", 00:04:22.992 "iscsi_get_auth_groups", 00:04:22.992 "iscsi_auth_group_remove_secret", 00:04:22.992 "iscsi_auth_group_add_secret", 00:04:22.992 "iscsi_delete_auth_group", 00:04:22.992 "iscsi_create_auth_group", 00:04:22.992 "iscsi_set_discovery_auth", 00:04:22.992 "iscsi_get_options", 00:04:22.992 "iscsi_target_node_request_logout", 00:04:22.992 "iscsi_target_node_set_redirect", 00:04:22.992 "iscsi_target_node_set_auth", 00:04:22.992 "iscsi_target_node_add_lun", 00:04:22.992 "iscsi_get_connections", 00:04:22.992 "iscsi_portal_group_set_auth", 00:04:22.992 "iscsi_start_portal_group", 00:04:22.992 "iscsi_delete_portal_group", 00:04:22.992 "iscsi_create_portal_group", 00:04:22.992 "iscsi_get_portal_groups", 00:04:22.992 "iscsi_delete_target_node", 00:04:22.992 "iscsi_target_node_remove_pg_ig_maps", 00:04:22.992 "iscsi_target_node_add_pg_ig_maps", 00:04:22.992 "iscsi_create_target_node", 00:04:22.992 "iscsi_get_target_nodes", 00:04:22.992 "iscsi_delete_initiator_group", 00:04:22.992 "iscsi_initiator_group_remove_initiators", 00:04:22.992 "iscsi_initiator_group_add_initiators", 00:04:22.992 "iscsi_create_initiator_group", 00:04:22.992 "iscsi_get_initiator_groups", 00:04:22.992 "nvmf_set_crdt", 00:04:22.992 "nvmf_set_config", 00:04:22.992 "nvmf_set_max_subsystems", 00:04:22.992 "nvmf_subsystem_get_listeners", 00:04:22.992 "nvmf_subsystem_get_qpairs", 00:04:22.992 "nvmf_subsystem_get_controllers", 00:04:22.992 "nvmf_get_stats", 00:04:22.992 "nvmf_get_transports", 00:04:22.992 "nvmf_create_transport", 00:04:22.992 "nvmf_get_targets", 00:04:22.992 "nvmf_delete_target", 00:04:22.992 "nvmf_create_target", 00:04:22.992 "nvmf_subsystem_allow_any_host", 00:04:22.992 "nvmf_subsystem_remove_host", 00:04:22.992 "nvmf_subsystem_add_host", 00:04:22.992 "nvmf_subsystem_remove_ns", 00:04:22.992 "nvmf_subsystem_add_ns", 00:04:22.992 "nvmf_subsystem_listener_set_ana_state", 00:04:22.992 "nvmf_discovery_get_referrals", 00:04:22.992 "nvmf_discovery_remove_referral", 00:04:22.992 "nvmf_discovery_add_referral", 00:04:22.992 "nvmf_subsystem_remove_listener", 
00:04:22.992 "nvmf_subsystem_add_listener", 00:04:22.992 "nvmf_delete_subsystem", 00:04:22.992 "nvmf_create_subsystem", 00:04:22.992 "nvmf_get_subsystems", 00:04:22.992 "env_dpdk_get_mem_stats", 00:04:22.992 "nbd_get_disks", 00:04:22.992 "nbd_stop_disk", 00:04:22.992 "nbd_start_disk", 00:04:22.992 "ublk_recover_disk", 00:04:22.992 "ublk_get_disks", 00:04:22.992 "ublk_stop_disk", 00:04:22.992 "ublk_start_disk", 00:04:22.992 "ublk_destroy_target", 00:04:22.992 "ublk_create_target", 00:04:22.992 "virtio_blk_create_transport", 00:04:22.992 "virtio_blk_get_transports", 00:04:22.992 "vhost_controller_set_coalescing", 00:04:22.992 "vhost_get_controllers", 00:04:22.992 "vhost_delete_controller", 00:04:22.992 "vhost_create_blk_controller", 00:04:22.992 "vhost_scsi_controller_remove_target", 00:04:22.992 "vhost_scsi_controller_add_target", 00:04:22.992 "vhost_start_scsi_controller", 00:04:22.992 "vhost_create_scsi_controller", 00:04:22.992 "thread_set_cpumask", 00:04:22.992 "framework_get_scheduler", 00:04:22.992 "framework_set_scheduler", 00:04:22.992 "framework_get_reactors", 00:04:22.992 "thread_get_io_channels", 00:04:22.992 "thread_get_pollers", 00:04:22.992 "thread_get_stats", 00:04:22.992 "framework_monitor_context_switch", 00:04:22.992 "spdk_kill_instance", 00:04:22.992 "log_enable_timestamps", 00:04:22.992 "log_get_flags", 00:04:22.992 "log_clear_flag", 00:04:22.992 "log_set_flag", 00:04:22.992 "log_get_level", 00:04:22.992 "log_set_level", 00:04:22.992 "log_get_print_level", 00:04:22.992 "log_set_print_level", 00:04:22.992 "framework_enable_cpumask_locks", 00:04:22.992 "framework_disable_cpumask_locks", 00:04:22.992 "framework_wait_init", 00:04:22.992 "framework_start_init", 00:04:22.992 "scsi_get_devices", 00:04:22.992 "bdev_get_histogram", 00:04:22.992 "bdev_enable_histogram", 00:04:22.992 "bdev_set_qos_limit", 00:04:22.992 "bdev_set_qd_sampling_period", 00:04:22.992 "bdev_get_bdevs", 00:04:22.992 "bdev_reset_iostat", 00:04:22.992 "bdev_get_iostat", 00:04:22.992 "bdev_examine", 00:04:22.992 "bdev_wait_for_examine", 00:04:22.992 "bdev_set_options", 00:04:22.992 "notify_get_notifications", 00:04:22.992 "notify_get_types", 00:04:22.992 "accel_get_stats", 00:04:22.992 "accel_set_options", 00:04:22.992 "accel_set_driver", 00:04:22.992 "accel_crypto_key_destroy", 00:04:22.992 "accel_crypto_keys_get", 00:04:22.992 "accel_crypto_key_create", 00:04:22.992 "accel_assign_opc", 00:04:22.992 "accel_get_module_info", 00:04:22.992 "accel_get_opc_assignments", 00:04:22.992 "vmd_rescan", 00:04:22.992 "vmd_remove_device", 00:04:22.992 "vmd_enable", 00:04:22.992 "sock_set_default_impl", 00:04:22.992 "sock_impl_set_options", 00:04:22.992 "sock_impl_get_options", 00:04:22.992 "iobuf_get_stats", 00:04:22.992 "iobuf_set_options", 00:04:22.992 "framework_get_pci_devices", 00:04:22.992 "framework_get_config", 00:04:22.992 "framework_get_subsystems", 00:04:22.992 "trace_get_info", 00:04:22.992 "trace_get_tpoint_group_mask", 00:04:22.992 "trace_disable_tpoint_group", 00:04:22.992 "trace_enable_tpoint_group", 00:04:22.992 "trace_clear_tpoint_mask", 00:04:22.992 "trace_set_tpoint_mask", 00:04:22.992 "spdk_get_version", 00:04:22.992 "rpc_get_methods" 00:04:22.992 ] 00:04:22.992 01:12:14 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:22.992 01:12:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:22.992 01:12:14 -- common/autotest_common.sh@10 -- # set +x 00:04:22.992 01:12:14 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:22.993 01:12:14 -- spdkcli/tcp.sh@38 -- # killprocess 
514075 00:04:22.993 01:12:14 -- common/autotest_common.sh@926 -- # '[' -z 514075 ']' 00:04:22.993 01:12:14 -- common/autotest_common.sh@930 -- # kill -0 514075 00:04:22.993 01:12:14 -- common/autotest_common.sh@931 -- # uname 00:04:22.993 01:12:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:22.993 01:12:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 514075 00:04:22.993 01:12:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:22.993 01:12:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:22.993 01:12:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 514075' 00:04:22.993 killing process with pid 514075 00:04:22.993 01:12:14 -- common/autotest_common.sh@945 -- # kill 514075 00:04:22.993 01:12:14 -- common/autotest_common.sh@950 -- # wait 514075 00:04:23.559 00:04:23.559 real 0m1.757s 00:04:23.559 user 0m3.367s 00:04:23.559 sys 0m0.476s 00:04:23.559 01:12:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.559 01:12:15 -- common/autotest_common.sh@10 -- # set +x 00:04:23.559 ************************************ 00:04:23.559 END TEST spdkcli_tcp 00:04:23.559 ************************************ 00:04:23.559 01:12:15 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:23.559 01:12:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:23.559 01:12:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:23.559 01:12:15 -- common/autotest_common.sh@10 -- # set +x 00:04:23.559 ************************************ 00:04:23.559 START TEST dpdk_mem_utility 00:04:23.559 ************************************ 00:04:23.559 01:12:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:23.559 * Looking for test storage... 00:04:23.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:23.559 01:12:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:23.559 01:12:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=514311 00:04:23.559 01:12:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.559 01:12:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 514311 00:04:23.559 01:12:15 -- common/autotest_common.sh@819 -- # '[' -z 514311 ']' 00:04:23.559 01:12:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.559 01:12:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:23.559 01:12:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.559 01:12:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:23.559 01:12:15 -- common/autotest_common.sh@10 -- # set +x 00:04:23.559 [2024-07-27 01:12:15.306860] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:23.559 [2024-07-27 01:12:15.306949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514311 ] 00:04:23.819 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.819 [2024-07-27 01:12:15.366465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.819 [2024-07-27 01:12:15.471817] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:23.819 [2024-07-27 01:12:15.471979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.757 01:12:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:24.757 01:12:16 -- common/autotest_common.sh@852 -- # return 0 00:04:24.757 01:12:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:24.757 01:12:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:24.757 01:12:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:24.757 01:12:16 -- common/autotest_common.sh@10 -- # set +x 00:04:24.757 { 00:04:24.757 "filename": "/tmp/spdk_mem_dump.txt" 00:04:24.757 } 00:04:24.757 01:12:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:24.757 01:12:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:24.757 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:24.757 1 heaps totaling size 814.000000 MiB 00:04:24.757 size: 814.000000 MiB heap id: 0 00:04:24.757 end heaps---------- 00:04:24.757 8 mempools totaling size 598.116089 MiB 00:04:24.757 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:24.757 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:24.757 size: 84.521057 MiB name: bdev_io_514311 00:04:24.757 size: 51.011292 MiB name: evtpool_514311 00:04:24.757 size: 50.003479 MiB name: msgpool_514311 00:04:24.757 size: 21.763794 MiB name: PDU_Pool 00:04:24.757 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:24.757 size: 0.026123 MiB name: Session_Pool 00:04:24.757 end mempools------- 00:04:24.757 6 memzones totaling size 4.142822 MiB 00:04:24.757 size: 1.000366 MiB name: RG_ring_0_514311 00:04:24.757 size: 1.000366 MiB name: RG_ring_1_514311 00:04:24.757 size: 1.000366 MiB name: RG_ring_4_514311 00:04:24.757 size: 1.000366 MiB name: RG_ring_5_514311 00:04:24.757 size: 0.125366 MiB name: RG_ring_2_514311 00:04:24.757 size: 0.015991 MiB name: RG_ring_3_514311 00:04:24.757 end memzones------- 00:04:24.757 01:12:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:24.757 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:24.757 list of free elements. 
size: 12.519348 MiB 00:04:24.757 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:24.757 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:24.757 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:24.757 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:24.757 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:24.757 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:24.757 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:24.757 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:24.757 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:24.757 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:24.757 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:24.757 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:24.757 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:24.757 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:24.757 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:24.757 list of standard malloc elements. size: 199.218079 MiB 00:04:24.757 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:24.757 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:24.757 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:24.757 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:24.757 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:24.757 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:24.757 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:24.757 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:24.757 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:24.757 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:24.757 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:24.757 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:24.757 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:24.757 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:24.757 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:24.757 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:24.757 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:24.757 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:24.757 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:24.757 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:24.757 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:24.757 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:24.757 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:24.757 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:24.757 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:24.757 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:24.757 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:24.757 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:24.757 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:24.758 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:24.758 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:24.758 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:24.758 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:24.758 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:24.758 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:24.758 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:24.758 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:24.758 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:24.758 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:24.758 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:24.758 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:24.758 list of memzone associated elements. size: 602.262573 MiB 00:04:24.758 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:24.758 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:24.758 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:24.758 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:24.758 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:24.758 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_514311_0 00:04:24.758 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:24.758 associated memzone info: size: 48.002930 MiB name: MP_evtpool_514311_0 00:04:24.758 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:24.758 associated memzone info: size: 48.002930 MiB name: MP_msgpool_514311_0 00:04:24.758 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:24.758 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:24.758 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:24.758 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:24.758 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:24.758 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_514311 00:04:24.758 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:24.758 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_514311 00:04:24.758 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:24.758 associated memzone info: size: 1.007996 MiB name: MP_evtpool_514311 00:04:24.758 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:24.758 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:24.758 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:24.758 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:24.758 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:24.758 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:24.758 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:24.758 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:24.758 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:24.758 associated memzone info: size: 1.000366 MiB name: RG_ring_0_514311 00:04:24.758 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:24.758 associated memzone info: size: 1.000366 MiB name: RG_ring_1_514311 00:04:24.758 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:24.758 associated memzone info: size: 1.000366 MiB name: RG_ring_4_514311 00:04:24.758 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:24.758 associated memzone info: size: 1.000366 MiB name: RG_ring_5_514311 00:04:24.758 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:24.758 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_514311 00:04:24.758 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:24.758 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:24.758 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:24.758 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:24.758 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:24.758 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:24.758 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:24.758 associated memzone info: size: 0.125366 MiB name: RG_ring_2_514311 00:04:24.758 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:24.758 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:24.758 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:24.758 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:24.758 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:24.758 associated memzone info: size: 0.015991 MiB name: RG_ring_3_514311 00:04:24.758 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:24.758 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:24.758 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:24.758 associated memzone info: size: 0.000183 MiB name: MP_msgpool_514311 00:04:24.758 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:24.758 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_514311 00:04:24.758 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:24.758 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:24.758 01:12:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:24.758 01:12:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 514311 00:04:24.758 01:12:16 -- common/autotest_common.sh@926 -- # '[' -z 514311 ']' 00:04:24.758 01:12:16 -- common/autotest_common.sh@930 -- # kill -0 514311 00:04:24.758 01:12:16 -- common/autotest_common.sh@931 -- # uname 00:04:24.758 01:12:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:24.758 01:12:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 514311 00:04:24.758 01:12:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:24.758 01:12:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:24.758 01:12:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 514311' 00:04:24.758 killing process with pid 514311 00:04:24.758 01:12:16 -- common/autotest_common.sh@945 -- # kill 514311 00:04:24.758 01:12:16 -- common/autotest_common.sh@950 -- # wait 514311 00:04:25.327 00:04:25.327 real 0m1.626s 00:04:25.327 user 0m1.776s 00:04:25.327 sys 0m0.443s 00:04:25.327 01:12:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.327 01:12:16 -- common/autotest_common.sh@10 -- # set +x 00:04:25.327 ************************************ 00:04:25.327 END TEST dpdk_mem_utility 00:04:25.327 ************************************ 00:04:25.327 01:12:16 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:25.327 01:12:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.327 01:12:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.327 01:12:16 -- common/autotest_common.sh@10 -- # set +x 00:04:25.327 
************************************ 00:04:25.327 START TEST event 00:04:25.327 ************************************ 00:04:25.327 01:12:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:25.327 * Looking for test storage... 00:04:25.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:25.327 01:12:16 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:25.327 01:12:16 -- bdev/nbd_common.sh@6 -- # set -e 00:04:25.327 01:12:16 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:25.327 01:12:16 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:25.327 01:12:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.327 01:12:16 -- common/autotest_common.sh@10 -- # set +x 00:04:25.327 ************************************ 00:04:25.327 START TEST event_perf 00:04:25.327 ************************************ 00:04:25.327 01:12:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:25.327 Running I/O for 1 seconds...[2024-07-27 01:12:16.932615] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:25.327 [2024-07-27 01:12:16.932696] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514604 ] 00:04:25.327 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.327 [2024-07-27 01:12:16.997886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:25.586 [2024-07-27 01:12:17.114693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.586 [2024-07-27 01:12:17.114759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.586 [2024-07-27 01:12:17.114848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:25.586 [2024-07-27 01:12:17.114851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.521 Running I/O for 1 seconds... 00:04:26.521 lcore 0: 232553 00:04:26.521 lcore 1: 232551 00:04:26.521 lcore 2: 232552 00:04:26.521 lcore 3: 232553 00:04:26.521 done. 
00:04:26.521 00:04:26.521 real 0m1.321s 00:04:26.521 user 0m4.227s 00:04:26.521 sys 0m0.089s 00:04:26.521 01:12:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.521 01:12:18 -- common/autotest_common.sh@10 -- # set +x 00:04:26.521 ************************************ 00:04:26.521 END TEST event_perf 00:04:26.521 ************************************ 00:04:26.521 01:12:18 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:26.522 01:12:18 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:26.522 01:12:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.522 01:12:18 -- common/autotest_common.sh@10 -- # set +x 00:04:26.522 ************************************ 00:04:26.522 START TEST event_reactor 00:04:26.522 ************************************ 00:04:26.522 01:12:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:26.781 [2024-07-27 01:12:18.281722] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:26.781 [2024-07-27 01:12:18.281810] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514761 ] 00:04:26.781 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.781 [2024-07-27 01:12:18.344659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.781 [2024-07-27 01:12:18.459890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.159 test_start 00:04:28.159 oneshot 00:04:28.159 tick 100 00:04:28.159 tick 100 00:04:28.159 tick 250 00:04:28.159 tick 100 00:04:28.159 tick 100 00:04:28.159 tick 250 00:04:28.159 tick 500 00:04:28.159 tick 100 00:04:28.159 tick 100 00:04:28.159 tick 100 00:04:28.159 tick 250 00:04:28.159 tick 100 00:04:28.159 tick 100 00:04:28.159 test_end 00:04:28.159 00:04:28.159 real 0m1.311s 00:04:28.159 user 0m1.226s 00:04:28.159 sys 0m0.080s 00:04:28.159 01:12:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.159 01:12:19 -- common/autotest_common.sh@10 -- # set +x 00:04:28.159 ************************************ 00:04:28.159 END TEST event_reactor 00:04:28.159 ************************************ 00:04:28.159 01:12:19 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:28.159 01:12:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:28.159 01:12:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:28.159 01:12:19 -- common/autotest_common.sh@10 -- # set +x 00:04:28.159 ************************************ 00:04:28.159 START TEST event_reactor_perf 00:04:28.159 ************************************ 00:04:28.159 01:12:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:28.159 [2024-07-27 01:12:19.620581] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:28.159 [2024-07-27 01:12:19.620670] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514932 ] 00:04:28.159 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.159 [2024-07-27 01:12:19.682601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.159 [2024-07-27 01:12:19.798593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.542 test_start 00:04:29.542 test_end 00:04:29.542 Performance: 349241 events per second 00:04:29.542 00:04:29.542 real 0m1.313s 00:04:29.542 user 0m1.223s 00:04:29.542 sys 0m0.084s 00:04:29.542 01:12:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.542 01:12:20 -- common/autotest_common.sh@10 -- # set +x 00:04:29.542 ************************************ 00:04:29.542 END TEST event_reactor_perf 00:04:29.542 ************************************ 00:04:29.542 01:12:20 -- event/event.sh@49 -- # uname -s 00:04:29.542 01:12:20 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:29.542 01:12:20 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:29.542 01:12:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.542 01:12:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.542 01:12:20 -- common/autotest_common.sh@10 -- # set +x 00:04:29.542 ************************************ 00:04:29.542 START TEST event_scheduler 00:04:29.542 ************************************ 00:04:29.542 01:12:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:29.542 * Looking for test storage... 00:04:29.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:29.542 01:12:20 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:29.542 01:12:20 -- scheduler/scheduler.sh@35 -- # scheduler_pid=515207 00:04:29.542 01:12:20 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:29.542 01:12:20 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.542 01:12:20 -- scheduler/scheduler.sh@37 -- # waitforlisten 515207 00:04:29.542 01:12:20 -- common/autotest_common.sh@819 -- # '[' -z 515207 ']' 00:04:29.543 01:12:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.543 01:12:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:29.543 01:12:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.543 01:12:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:29.543 01:12:20 -- common/autotest_common.sh@10 -- # set +x 00:04:29.543 [2024-07-27 01:12:21.033400] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:29.543 [2024-07-27 01:12:21.033493] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid515207 ] 00:04:29.543 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.543 [2024-07-27 01:12:21.092487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.543 [2024-07-27 01:12:21.199090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.543 [2024-07-27 01:12:21.199150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.543 [2024-07-27 01:12:21.199215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.543 [2024-07-27 01:12:21.199218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.543 01:12:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:29.543 01:12:21 -- common/autotest_common.sh@852 -- # return 0 00:04:29.543 01:12:21 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:29.543 01:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:29.543 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.543 POWER: Env isn't set yet! 00:04:29.543 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:29.543 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:29.543 POWER: Cannot get available frequencies of lcore 0 00:04:29.543 POWER: Attempting to initialise PSTAT power management... 00:04:29.543 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:29.543 POWER: Initialized successfully for lcore 0 power management 00:04:29.543 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:29.543 POWER: Initialized successfully for lcore 1 power management 00:04:29.543 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:29.543 POWER: Initialized successfully for lcore 2 power management 00:04:29.543 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:29.543 POWER: Initialized successfully for lcore 3 power management 00:04:29.543 [2024-07-27 01:12:21.278246] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:29.543 [2024-07-27 01:12:21.278264] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:29.543 [2024-07-27 01:12:21.278275] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:29.543 01:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:29.543 01:12:21 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:29.543 01:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:29.543 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.803 [2024-07-27 01:12:21.371033] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:29.803 01:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:29.803 01:12:21 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:29.803 01:12:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.803 01:12:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.803 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.803 ************************************ 00:04:29.803 START TEST scheduler_create_thread 00:04:29.803 ************************************ 00:04:29.803 01:12:21 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:04:29.803 01:12:21 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:29.803 01:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:29.803 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.803 2 00:04:29.803 01:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:29.803 01:12:21 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:29.803 01:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:29.803 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.803 3 00:04:29.803 01:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:29.803 01:12:21 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:29.803 01:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:29.803 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.803 4 00:04:29.803 01:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:29.803 01:12:21 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:29.803 01:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:29.803 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.803 5 00:04:29.803 01:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:29.803 01:12:21 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:29.803 01:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:29.803 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.803 6 00:04:29.803 01:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:29.803 01:12:21 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:29.804 01:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:29.804 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.804 7 00:04:29.804 01:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:29.804 01:12:21 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:29.804 01:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:29.804 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.804 8 00:04:29.804 01:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:29.804 01:12:21 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:29.804 01:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:29.804 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.804 9 00:04:29.804 
01:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:29.804 01:12:21 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:29.804 01:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:29.804 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.804 10 00:04:29.804 01:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:29.804 01:12:21 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:29.804 01:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:29.804 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.804 01:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:29.804 01:12:21 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:29.804 01:12:21 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:29.804 01:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:29.804 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.804 01:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:29.804 01:12:21 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:29.804 01:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:29.804 01:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:30.371 01:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:30.371 01:12:22 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:30.371 01:12:22 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:30.371 01:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:30.371 01:12:22 -- common/autotest_common.sh@10 -- # set +x 00:04:31.752 01:12:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:31.752 00:04:31.752 real 0m1.753s 00:04:31.752 user 0m0.010s 00:04:31.752 sys 0m0.004s 00:04:31.752 01:12:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.752 01:12:23 -- common/autotest_common.sh@10 -- # set +x 00:04:31.752 ************************************ 00:04:31.752 END TEST scheduler_create_thread 00:04:31.752 ************************************ 00:04:31.752 01:12:23 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:31.752 01:12:23 -- scheduler/scheduler.sh@46 -- # killprocess 515207 00:04:31.752 01:12:23 -- common/autotest_common.sh@926 -- # '[' -z 515207 ']' 00:04:31.752 01:12:23 -- common/autotest_common.sh@930 -- # kill -0 515207 00:04:31.752 01:12:23 -- common/autotest_common.sh@931 -- # uname 00:04:31.752 01:12:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:31.752 01:12:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 515207 00:04:31.752 01:12:23 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:04:31.752 01:12:23 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:04:31.752 01:12:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 515207' 00:04:31.752 killing process with pid 515207 00:04:31.752 01:12:23 -- common/autotest_common.sh@945 -- # kill 515207 00:04:31.752 01:12:23 -- common/autotest_common.sh@950 -- # wait 515207 00:04:32.011 [2024-07-27 01:12:23.610416] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
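Note: the scheduler_create_thread test above is driven entirely over the RPC socket with the scheduler test plugin loaded. Condensed into a sketch (rpc_cmd is the autotest helper that forwards to scripts/rpc.py; the per-mask calls that appear one by one in the trace are folded into a loop here, so this is an illustration of the flow rather than the literal scheduler.sh):

# Four busy threads pinned to cores 0-3 and four idle threads on the same cores,
# then unpinned threads whose active percentage is changed or which get deleted.
for mask in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)   # -> 11 in this run
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)     # -> 12 in this run
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"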
00:04:32.011 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:32.011 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:32.011 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:32.011 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:32.011 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:32.011 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:32.011 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:32.011 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:32.270 00:04:32.270 real 0m2.917s 00:04:32.270 user 0m3.769s 00:04:32.270 sys 0m0.291s 00:04:32.270 01:12:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.270 01:12:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.270 ************************************ 00:04:32.270 END TEST event_scheduler 00:04:32.270 ************************************ 00:04:32.270 01:12:23 -- event/event.sh@51 -- # modprobe -n nbd 00:04:32.270 01:12:23 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:32.270 01:12:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.270 01:12:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.270 01:12:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.270 ************************************ 00:04:32.270 START TEST app_repeat 00:04:32.270 ************************************ 00:04:32.270 01:12:23 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:04:32.270 01:12:23 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.270 01:12:23 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.270 01:12:23 -- event/event.sh@13 -- # local nbd_list 00:04:32.270 01:12:23 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.270 01:12:23 -- event/event.sh@14 -- # local bdev_list 00:04:32.270 01:12:23 -- event/event.sh@15 -- # local repeat_times=4 00:04:32.270 01:12:23 -- event/event.sh@17 -- # modprobe nbd 00:04:32.270 01:12:23 -- event/event.sh@19 -- # repeat_pid=515575 00:04:32.270 01:12:23 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:32.270 01:12:23 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.270 01:12:23 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 515575' 00:04:32.270 Process app_repeat pid: 515575 00:04:32.270 01:12:23 -- event/event.sh@23 -- # for i in {0..2} 00:04:32.270 01:12:23 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:32.271 spdk_app_start Round 0 00:04:32.271 01:12:23 -- event/event.sh@25 -- # waitforlisten 515575 /var/tmp/spdk-nbd.sock 00:04:32.271 01:12:23 -- common/autotest_common.sh@819 -- # '[' -z 515575 ']' 00:04:32.271 01:12:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:32.271 01:12:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:32.271 01:12:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:32.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:32.271 01:12:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:32.271 01:12:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.271 [2024-07-27 01:12:23.915220] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:32.271 [2024-07-27 01:12:23.915288] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid515575 ] 00:04:32.271 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.271 [2024-07-27 01:12:23.973273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.529 [2024-07-27 01:12:24.084195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.529 [2024-07-27 01:12:24.084198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.463 01:12:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:33.463 01:12:24 -- common/autotest_common.sh@852 -- # return 0 00:04:33.463 01:12:24 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.463 Malloc0 00:04:33.463 01:12:25 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.721 Malloc1 00:04:33.721 01:12:25 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@12 -- # local i 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.721 01:12:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:33.979 /dev/nbd0 00:04:33.979 01:12:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:33.979 01:12:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:33.979 01:12:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:33.979 01:12:25 -- common/autotest_common.sh@857 -- # local i 00:04:33.979 01:12:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:33.979 01:12:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:33.979 01:12:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:33.979 01:12:25 -- 
common/autotest_common.sh@861 -- # break 00:04:33.979 01:12:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:33.979 01:12:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:33.979 01:12:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:33.979 1+0 records in 00:04:33.979 1+0 records out 00:04:33.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186045 s, 22.0 MB/s 00:04:33.979 01:12:25 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.979 01:12:25 -- common/autotest_common.sh@874 -- # size=4096 00:04:33.979 01:12:25 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.979 01:12:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:33.979 01:12:25 -- common/autotest_common.sh@877 -- # return 0 00:04:33.979 01:12:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.979 01:12:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.979 01:12:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:34.237 /dev/nbd1 00:04:34.237 01:12:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:34.237 01:12:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:34.237 01:12:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:34.237 01:12:25 -- common/autotest_common.sh@857 -- # local i 00:04:34.237 01:12:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:34.237 01:12:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:34.237 01:12:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:34.237 01:12:25 -- common/autotest_common.sh@861 -- # break 00:04:34.237 01:12:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:34.237 01:12:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:34.237 01:12:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.237 1+0 records in 00:04:34.237 1+0 records out 00:04:34.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227622 s, 18.0 MB/s 00:04:34.237 01:12:25 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.237 01:12:25 -- common/autotest_common.sh@874 -- # size=4096 00:04:34.237 01:12:25 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.237 01:12:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:34.237 01:12:25 -- common/autotest_common.sh@877 -- # return 0 00:04:34.237 01:12:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.237 01:12:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.237 01:12:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.237 01:12:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.237 01:12:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:34.495 { 00:04:34.495 "nbd_device": "/dev/nbd0", 00:04:34.495 "bdev_name": "Malloc0" 00:04:34.495 }, 00:04:34.495 { 00:04:34.495 "nbd_device": "/dev/nbd1", 
00:04:34.495 "bdev_name": "Malloc1" 00:04:34.495 } 00:04:34.495 ]' 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:34.495 { 00:04:34.495 "nbd_device": "/dev/nbd0", 00:04:34.495 "bdev_name": "Malloc0" 00:04:34.495 }, 00:04:34.495 { 00:04:34.495 "nbd_device": "/dev/nbd1", 00:04:34.495 "bdev_name": "Malloc1" 00:04:34.495 } 00:04:34.495 ]' 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:34.495 /dev/nbd1' 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:34.495 /dev/nbd1' 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@65 -- # count=2 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@95 -- # count=2 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:34.495 256+0 records in 00:04:34.495 256+0 records out 00:04:34.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497921 s, 211 MB/s 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.495 01:12:26 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:34.753 256+0 records in 00:04:34.753 256+0 records out 00:04:34.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245263 s, 42.8 MB/s 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:34.753 256+0 records in 00:04:34.753 256+0 records out 00:04:34.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226983 s, 46.2 MB/s 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@51 -- # local i 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.753 01:12:26 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:35.011 01:12:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:35.011 01:12:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:35.011 01:12:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:35.011 01:12:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.011 01:12:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.011 01:12:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:35.011 01:12:26 -- bdev/nbd_common.sh@41 -- # break 00:04:35.011 01:12:26 -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.011 01:12:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.011 01:12:26 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:35.268 01:12:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:35.268 01:12:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:35.268 01:12:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:35.268 01:12:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.268 01:12:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.268 01:12:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:35.268 01:12:26 -- bdev/nbd_common.sh@41 -- # break 00:04:35.268 01:12:26 -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.268 01:12:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.268 01:12:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.268 01:12:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.526 01:12:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:35.526 01:12:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:35.526 01:12:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.526 01:12:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:35.526 01:12:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:35.526 01:12:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.526 01:12:27 -- bdev/nbd_common.sh@65 -- # true 00:04:35.526 01:12:27 -- bdev/nbd_common.sh@65 -- # count=0 00:04:35.526 01:12:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:35.526 01:12:27 -- bdev/nbd_common.sh@104 -- # count=0 00:04:35.526 01:12:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:35.526 01:12:27 -- bdev/nbd_common.sh@109 -- # return 0 00:04:35.526 01:12:27 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:35.784 01:12:27 -- event/event.sh@35 -- # 
sleep 3 00:04:36.042 [2024-07-27 01:12:27.653797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.042 [2024-07-27 01:12:27.772779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.042 [2024-07-27 01:12:27.772779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.304 [2024-07-27 01:12:27.833884] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:36.304 [2024-07-27 01:12:27.833969] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:38.876 01:12:30 -- event/event.sh@23 -- # for i in {0..2} 00:04:38.876 01:12:30 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:38.876 spdk_app_start Round 1 00:04:38.876 01:12:30 -- event/event.sh@25 -- # waitforlisten 515575 /var/tmp/spdk-nbd.sock 00:04:38.876 01:12:30 -- common/autotest_common.sh@819 -- # '[' -z 515575 ']' 00:04:38.876 01:12:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.876 01:12:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:38.876 01:12:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:38.876 01:12:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:38.876 01:12:30 -- common/autotest_common.sh@10 -- # set +x 00:04:38.876 01:12:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:38.876 01:12:30 -- common/autotest_common.sh@852 -- # return 0 00:04:38.876 01:12:30 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.134 Malloc0 00:04:39.134 01:12:30 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.392 Malloc1 00:04:39.392 01:12:31 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@12 -- # local i 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.392 01:12:31 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:39.650 /dev/nbd0 00:04:39.650 01:12:31 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.650 01:12:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.650 01:12:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:39.650 01:12:31 -- common/autotest_common.sh@857 -- # local i 00:04:39.650 01:12:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:39.650 01:12:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:39.650 01:12:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:39.650 01:12:31 -- common/autotest_common.sh@861 -- # break 00:04:39.650 01:12:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:39.650 01:12:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:39.651 01:12:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.651 1+0 records in 00:04:39.651 1+0 records out 00:04:39.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181394 s, 22.6 MB/s 00:04:39.651 01:12:31 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.651 01:12:31 -- common/autotest_common.sh@874 -- # size=4096 00:04:39.651 01:12:31 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.651 01:12:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:39.651 01:12:31 -- common/autotest_common.sh@877 -- # return 0 00:04:39.651 01:12:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.651 01:12:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.651 01:12:31 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:39.909 /dev/nbd1 00:04:39.909 01:12:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:39.909 01:12:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:39.909 01:12:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:39.909 01:12:31 -- common/autotest_common.sh@857 -- # local i 00:04:39.909 01:12:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:39.909 01:12:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:39.909 01:12:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:39.909 01:12:31 -- common/autotest_common.sh@861 -- # break 00:04:39.909 01:12:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:39.909 01:12:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:39.909 01:12:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.909 1+0 records in 00:04:39.909 1+0 records out 00:04:39.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016513 s, 24.8 MB/s 00:04:39.909 01:12:31 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.909 01:12:31 -- common/autotest_common.sh@874 -- # size=4096 00:04:39.909 01:12:31 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.909 01:12:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:39.909 01:12:31 -- common/autotest_common.sh@877 -- # return 0 00:04:39.909 01:12:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.909 01:12:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.909 01:12:31 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.909 01:12:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.909 01:12:31 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.167 01:12:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:40.167 { 00:04:40.167 "nbd_device": "/dev/nbd0", 00:04:40.167 "bdev_name": "Malloc0" 00:04:40.167 }, 00:04:40.167 { 00:04:40.167 "nbd_device": "/dev/nbd1", 00:04:40.167 "bdev_name": "Malloc1" 00:04:40.167 } 00:04:40.167 ]' 00:04:40.167 01:12:31 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:40.167 { 00:04:40.167 "nbd_device": "/dev/nbd0", 00:04:40.167 "bdev_name": "Malloc0" 00:04:40.167 }, 00:04:40.167 { 00:04:40.167 "nbd_device": "/dev/nbd1", 00:04:40.167 "bdev_name": "Malloc1" 00:04:40.167 } 00:04:40.167 ]' 00:04:40.167 01:12:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:40.425 /dev/nbd1' 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:40.425 /dev/nbd1' 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@65 -- # count=2 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@95 -- # count=2 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:40.425 256+0 records in 00:04:40.425 256+0 records out 00:04:40.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049747 s, 211 MB/s 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:40.425 256+0 records in 00:04:40.425 256+0 records out 00:04:40.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268552 s, 39.0 MB/s 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.425 01:12:31 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:40.425 256+0 records in 00:04:40.425 256+0 records out 00:04:40.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270443 s, 38.8 MB/s 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@51 -- # local i 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.425 01:12:32 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:40.683 01:12:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:40.683 01:12:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:40.683 01:12:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:40.683 01:12:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.683 01:12:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.683 01:12:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:40.683 01:12:32 -- bdev/nbd_common.sh@41 -- # break 00:04:40.683 01:12:32 -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.683 01:12:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.683 01:12:32 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:40.940 01:12:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:40.940 01:12:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:40.940 01:12:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:40.940 01:12:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.940 01:12:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.940 01:12:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:40.940 01:12:32 -- bdev/nbd_common.sh@41 -- # break 00:04:40.940 01:12:32 -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.940 01:12:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.940 01:12:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.940 01:12:32 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.197 01:12:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:41.197 01:12:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:41.197 01:12:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.197 01:12:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.198 01:12:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.198 01:12:32 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:04:41.198 01:12:32 -- bdev/nbd_common.sh@65 -- # true 00:04:41.198 01:12:32 -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.198 01:12:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.198 01:12:32 -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.198 01:12:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.198 01:12:32 -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.198 01:12:32 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:41.457 01:12:33 -- event/event.sh@35 -- # sleep 3 00:04:41.716 [2024-07-27 01:12:33.343538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.716 [2024-07-27 01:12:33.458517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.716 [2024-07-27 01:12:33.458521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.976 [2024-07-27 01:12:33.519623] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:41.976 [2024-07-27 01:12:33.519707] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.512 01:12:36 -- event/event.sh@23 -- # for i in {0..2} 00:04:44.512 01:12:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:44.512 spdk_app_start Round 2 00:04:44.512 01:12:36 -- event/event.sh@25 -- # waitforlisten 515575 /var/tmp/spdk-nbd.sock 00:04:44.512 01:12:36 -- common/autotest_common.sh@819 -- # '[' -z 515575 ']' 00:04:44.512 01:12:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.512 01:12:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:44.512 01:12:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
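The repeated "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock..." lines come from the waitforlisten helper that gates every round before any RPC is issued. Its real implementation lives in test/common/autotest_common.sh; the loop below is only a simplified stand-in, shown to make the retry idea (the max_retries=100 visible in the trace) concrete:

# Simplified stand-in, NOT the actual helper: poll until the app has created
# its RPC socket, bailing out if it exits or after 100 attempts.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk-nbd.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
        [[ -S $rpc_addr ]] && return 0           # socket exists, RPCs can be sent
        sleep 0.1
    done
    return 1
}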
00:04:44.512 01:12:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:44.512 01:12:36 -- common/autotest_common.sh@10 -- # set +x 00:04:44.769 01:12:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:44.769 01:12:36 -- common/autotest_common.sh@852 -- # return 0 00:04:44.769 01:12:36 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.028 Malloc0 00:04:45.028 01:12:36 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.288 Malloc1 00:04:45.288 01:12:36 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@12 -- # local i 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.288 01:12:36 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:45.548 /dev/nbd0 00:04:45.548 01:12:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:45.548 01:12:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:45.548 01:12:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:45.548 01:12:37 -- common/autotest_common.sh@857 -- # local i 00:04:45.548 01:12:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:45.548 01:12:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:45.548 01:12:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:45.548 01:12:37 -- common/autotest_common.sh@861 -- # break 00:04:45.548 01:12:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:45.548 01:12:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:45.548 01:12:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.548 1+0 records in 00:04:45.548 1+0 records out 00:04:45.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198265 s, 20.7 MB/s 00:04:45.548 01:12:37 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.548 01:12:37 -- common/autotest_common.sh@874 -- # size=4096 00:04:45.548 01:12:37 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.548 01:12:37 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:04:45.548 01:12:37 -- common/autotest_common.sh@877 -- # return 0 00:04:45.548 01:12:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.548 01:12:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.548 01:12:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:45.806 /dev/nbd1 00:04:45.806 01:12:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:45.806 01:12:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:45.806 01:12:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:45.806 01:12:37 -- common/autotest_common.sh@857 -- # local i 00:04:45.806 01:12:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:45.806 01:12:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:45.806 01:12:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:45.806 01:12:37 -- common/autotest_common.sh@861 -- # break 00:04:45.806 01:12:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:45.806 01:12:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:45.806 01:12:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.806 1+0 records in 00:04:45.806 1+0 records out 00:04:45.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246788 s, 16.6 MB/s 00:04:45.806 01:12:37 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.806 01:12:37 -- common/autotest_common.sh@874 -- # size=4096 00:04:45.806 01:12:37 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.806 01:12:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:45.806 01:12:37 -- common/autotest_common.sh@877 -- # return 0 00:04:45.806 01:12:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.806 01:12:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.806 01:12:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.806 01:12:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.806 01:12:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:46.065 { 00:04:46.065 "nbd_device": "/dev/nbd0", 00:04:46.065 "bdev_name": "Malloc0" 00:04:46.065 }, 00:04:46.065 { 00:04:46.065 "nbd_device": "/dev/nbd1", 00:04:46.065 "bdev_name": "Malloc1" 00:04:46.065 } 00:04:46.065 ]' 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:46.065 { 00:04:46.065 "nbd_device": "/dev/nbd0", 00:04:46.065 "bdev_name": "Malloc0" 00:04:46.065 }, 00:04:46.065 { 00:04:46.065 "nbd_device": "/dev/nbd1", 00:04:46.065 "bdev_name": "Malloc1" 00:04:46.065 } 00:04:46.065 ]' 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.065 /dev/nbd1' 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.065 /dev/nbd1' 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.065 01:12:37 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.065 256+0 records in 00:04:46.065 256+0 records out 00:04:46.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00383302 s, 274 MB/s 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.065 256+0 records in 00:04:46.065 256+0 records out 00:04:46.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244847 s, 42.8 MB/s 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.065 256+0 records in 00:04:46.065 256+0 records out 00:04:46.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289026 s, 36.3 MB/s 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@51 -- # local i 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.065 01:12:37 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.323 01:12:37 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.323 01:12:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.323 01:12:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.323 01:12:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.323 01:12:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.323 01:12:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:46.323 01:12:37 -- bdev/nbd_common.sh@41 -- # break 00:04:46.323 01:12:37 -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.323 01:12:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.323 01:12:37 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:46.581 01:12:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:46.581 01:12:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:46.581 01:12:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:46.581 01:12:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.581 01:12:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.581 01:12:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:46.581 01:12:38 -- bdev/nbd_common.sh@41 -- # break 00:04:46.581 01:12:38 -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.581 01:12:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.581 01:12:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.581 01:12:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.839 01:12:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:46.839 01:12:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:46.839 01:12:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.839 01:12:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:46.839 01:12:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:46.839 01:12:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.839 01:12:38 -- bdev/nbd_common.sh@65 -- # true 00:04:46.839 01:12:38 -- bdev/nbd_common.sh@65 -- # count=0 00:04:46.839 01:12:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:46.839 01:12:38 -- bdev/nbd_common.sh@104 -- # count=0 00:04:46.839 01:12:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:46.839 01:12:38 -- bdev/nbd_common.sh@109 -- # return 0 00:04:46.839 01:12:38 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:47.097 01:12:38 -- event/event.sh@35 -- # sleep 3 00:04:47.355 [2024-07-27 01:12:39.056270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.614 [2024-07-27 01:12:39.167203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.614 [2024-07-27 01:12:39.167207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.614 [2024-07-27 01:12:39.227624] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:47.614 [2024-07-27 01:12:39.227708] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
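Each of the three app_repeat rounds above performs the same NBD round-trip, which is easy to lose in the interleaved xtrace. Restated with the commands visible in the trace ($rpc standing in for scripts/rpc.py -s /var/tmp/spdk-nbd.sock and $testdir for the test/event directory under the workspace, so this is a readable summary rather than the literal nbd_common.sh helpers):

# One round: export two malloc bdevs over NBD, write a random 1 MiB pattern
# through each block device, read it back and compare, then tear down.
$rpc bdev_malloc_create 64 4096        # 64 MB malloc bdev, 4096-byte blocks -> Malloc0
$rpc bdev_malloc_create 64 4096        # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=$testdir/nbdrandtest bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$testdir/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct   # write pass
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M $testdir/nbdrandtest $nbd                              # verify pass
done
rm $testdir/nbdrandtest

$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc spdk_kill_instance SIGTERM        # end of round; the test sleeps 3s and repeats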
00:04:50.143 01:12:41 -- event/event.sh@38 -- # waitforlisten 515575 /var/tmp/spdk-nbd.sock 00:04:50.143 01:12:41 -- common/autotest_common.sh@819 -- # '[' -z 515575 ']' 00:04:50.143 01:12:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.143 01:12:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:50.143 01:12:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:50.143 01:12:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:50.143 01:12:41 -- common/autotest_common.sh@10 -- # set +x 00:04:50.401 01:12:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:50.401 01:12:42 -- common/autotest_common.sh@852 -- # return 0 00:04:50.401 01:12:42 -- event/event.sh@39 -- # killprocess 515575 00:04:50.401 01:12:42 -- common/autotest_common.sh@926 -- # '[' -z 515575 ']' 00:04:50.401 01:12:42 -- common/autotest_common.sh@930 -- # kill -0 515575 00:04:50.401 01:12:42 -- common/autotest_common.sh@931 -- # uname 00:04:50.401 01:12:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:50.401 01:12:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 515575 00:04:50.401 01:12:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:50.401 01:12:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:50.401 01:12:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 515575' 00:04:50.401 killing process with pid 515575 00:04:50.401 01:12:42 -- common/autotest_common.sh@945 -- # kill 515575 00:04:50.401 01:12:42 -- common/autotest_common.sh@950 -- # wait 515575 00:04:50.660 spdk_app_start is called in Round 0. 00:04:50.660 Shutdown signal received, stop current app iteration 00:04:50.660 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:04:50.660 spdk_app_start is called in Round 1. 00:04:50.660 Shutdown signal received, stop current app iteration 00:04:50.660 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:04:50.660 spdk_app_start is called in Round 2. 00:04:50.660 Shutdown signal received, stop current app iteration 00:04:50.660 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:04:50.660 spdk_app_start is called in Round 3. 
00:04:50.660 Shutdown signal received, stop current app iteration 00:04:50.660 01:12:42 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:50.660 01:12:42 -- event/event.sh@42 -- # return 0 00:04:50.660 00:04:50.660 real 0m18.413s 00:04:50.660 user 0m39.763s 00:04:50.660 sys 0m3.194s 00:04:50.660 01:12:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.660 01:12:42 -- common/autotest_common.sh@10 -- # set +x 00:04:50.660 ************************************ 00:04:50.660 END TEST app_repeat 00:04:50.660 ************************************ 00:04:50.660 01:12:42 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:50.660 01:12:42 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:50.660 01:12:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.660 01:12:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.660 01:12:42 -- common/autotest_common.sh@10 -- # set +x 00:04:50.660 ************************************ 00:04:50.660 START TEST cpu_locks 00:04:50.660 ************************************ 00:04:50.660 01:12:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:50.660 * Looking for test storage... 00:04:50.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:50.660 01:12:42 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:50.660 01:12:42 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:50.660 01:12:42 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:50.660 01:12:42 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:50.660 01:12:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.660 01:12:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.660 01:12:42 -- common/autotest_common.sh@10 -- # set +x 00:04:50.660 ************************************ 00:04:50.660 START TEST default_locks 00:04:50.660 ************************************ 00:04:50.660 01:12:42 -- common/autotest_common.sh@1104 -- # default_locks 00:04:50.660 01:12:42 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=518083 00:04:50.660 01:12:42 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.660 01:12:42 -- event/cpu_locks.sh@47 -- # waitforlisten 518083 00:04:50.660 01:12:42 -- common/autotest_common.sh@819 -- # '[' -z 518083 ']' 00:04:50.660 01:12:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.660 01:12:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:50.660 01:12:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.660 01:12:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:50.660 01:12:42 -- common/autotest_common.sh@10 -- # set +x 00:04:50.919 [2024-07-27 01:12:42.436490] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:50.919 [2024-07-27 01:12:42.436580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518083 ] 00:04:50.919 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.919 [2024-07-27 01:12:42.493333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.919 [2024-07-27 01:12:42.597432] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:50.919 [2024-07-27 01:12:42.597633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.853 01:12:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:51.853 01:12:43 -- common/autotest_common.sh@852 -- # return 0 00:04:51.853 01:12:43 -- event/cpu_locks.sh@49 -- # locks_exist 518083 00:04:51.853 01:12:43 -- event/cpu_locks.sh@22 -- # lslocks -p 518083 00:04:51.853 01:12:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.112 lslocks: write error 00:04:52.112 01:12:43 -- event/cpu_locks.sh@50 -- # killprocess 518083 00:04:52.112 01:12:43 -- common/autotest_common.sh@926 -- # '[' -z 518083 ']' 00:04:52.112 01:12:43 -- common/autotest_common.sh@930 -- # kill -0 518083 00:04:52.112 01:12:43 -- common/autotest_common.sh@931 -- # uname 00:04:52.112 01:12:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:52.112 01:12:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 518083 00:04:52.112 01:12:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:52.112 01:12:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:52.112 01:12:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 518083' 00:04:52.112 killing process with pid 518083 00:04:52.112 01:12:43 -- common/autotest_common.sh@945 -- # kill 518083 00:04:52.112 01:12:43 -- common/autotest_common.sh@950 -- # wait 518083 00:04:52.679 01:12:44 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 518083 00:04:52.679 01:12:44 -- common/autotest_common.sh@640 -- # local es=0 00:04:52.679 01:12:44 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 518083 00:04:52.679 01:12:44 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:52.679 01:12:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:52.679 01:12:44 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:52.679 01:12:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:52.679 01:12:44 -- common/autotest_common.sh@643 -- # waitforlisten 518083 00:04:52.679 01:12:44 -- common/autotest_common.sh@819 -- # '[' -z 518083 ']' 00:04:52.679 01:12:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.679 01:12:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:52.679 01:12:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:52.679 01:12:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:52.679 01:12:44 -- common/autotest_common.sh@10 -- # set +x 00:04:52.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (518083) - No such process 00:04:52.679 ERROR: process (pid: 518083) is no longer running 00:04:52.679 01:12:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:52.679 01:12:44 -- common/autotest_common.sh@852 -- # return 1 00:04:52.679 01:12:44 -- common/autotest_common.sh@643 -- # es=1 00:04:52.679 01:12:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:52.679 01:12:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:52.679 01:12:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:52.679 01:12:44 -- event/cpu_locks.sh@54 -- # no_locks 00:04:52.679 01:12:44 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:52.679 01:12:44 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:52.679 01:12:44 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:52.679 00:04:52.679 real 0m1.821s 00:04:52.679 user 0m1.958s 00:04:52.679 sys 0m0.557s 00:04:52.679 01:12:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.679 01:12:44 -- common/autotest_common.sh@10 -- # set +x 00:04:52.679 ************************************ 00:04:52.679 END TEST default_locks 00:04:52.679 ************************************ 00:04:52.679 01:12:44 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:52.679 01:12:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.679 01:12:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.679 01:12:44 -- common/autotest_common.sh@10 -- # set +x 00:04:52.679 ************************************ 00:04:52.679 START TEST default_locks_via_rpc 00:04:52.679 ************************************ 00:04:52.679 01:12:44 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:04:52.679 01:12:44 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=518294 00:04:52.679 01:12:44 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.679 01:12:44 -- event/cpu_locks.sh@63 -- # waitforlisten 518294 00:04:52.679 01:12:44 -- common/autotest_common.sh@819 -- # '[' -z 518294 ']' 00:04:52.679 01:12:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.679 01:12:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:52.679 01:12:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.679 01:12:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:52.679 01:12:44 -- common/autotest_common.sh@10 -- # set +x 00:04:52.679 [2024-07-27 01:12:44.281131] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:52.679 [2024-07-27 01:12:44.281225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518294 ] 00:04:52.679 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.679 [2024-07-27 01:12:44.337812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.938 [2024-07-27 01:12:44.447578] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.938 [2024-07-27 01:12:44.447733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.520 01:12:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:53.520 01:12:45 -- common/autotest_common.sh@852 -- # return 0 00:04:53.520 01:12:45 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:53.520 01:12:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.520 01:12:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.520 01:12:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.520 01:12:45 -- event/cpu_locks.sh@67 -- # no_locks 00:04:53.520 01:12:45 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:53.520 01:12:45 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:53.520 01:12:45 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:53.520 01:12:45 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:53.520 01:12:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.520 01:12:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.520 01:12:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.520 01:12:45 -- event/cpu_locks.sh@71 -- # locks_exist 518294 00:04:53.520 01:12:45 -- event/cpu_locks.sh@22 -- # lslocks -p 518294 00:04:53.520 01:12:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.824 01:12:45 -- event/cpu_locks.sh@73 -- # killprocess 518294 00:04:53.824 01:12:45 -- common/autotest_common.sh@926 -- # '[' -z 518294 ']' 00:04:53.824 01:12:45 -- common/autotest_common.sh@930 -- # kill -0 518294 00:04:53.824 01:12:45 -- common/autotest_common.sh@931 -- # uname 00:04:53.824 01:12:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:53.824 01:12:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 518294 00:04:53.824 01:12:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:53.824 01:12:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:53.824 01:12:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 518294' 00:04:53.824 killing process with pid 518294 00:04:53.824 01:12:45 -- common/autotest_common.sh@945 -- # kill 518294 00:04:53.824 01:12:45 -- common/autotest_common.sh@950 -- # wait 518294 00:04:54.392 00:04:54.392 real 0m1.703s 00:04:54.392 user 0m1.827s 00:04:54.392 sys 0m0.570s 00:04:54.392 01:12:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.392 01:12:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.392 ************************************ 00:04:54.392 END TEST default_locks_via_rpc 00:04:54.392 ************************************ 00:04:54.392 01:12:45 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:54.392 01:12:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.392 01:12:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.392 01:12:45 -- common/autotest_common.sh@10 
-- # set +x 00:04:54.392 ************************************ 00:04:54.392 START TEST non_locking_app_on_locked_coremask 00:04:54.392 ************************************ 00:04:54.392 01:12:45 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:04:54.392 01:12:45 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=518587 00:04:54.392 01:12:45 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.392 01:12:45 -- event/cpu_locks.sh@81 -- # waitforlisten 518587 /var/tmp/spdk.sock 00:04:54.392 01:12:45 -- common/autotest_common.sh@819 -- # '[' -z 518587 ']' 00:04:54.392 01:12:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.392 01:12:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:54.392 01:12:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.392 01:12:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:54.392 01:12:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.392 [2024-07-27 01:12:46.005690] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:54.392 [2024-07-27 01:12:46.005763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518587 ] 00:04:54.392 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.392 [2024-07-27 01:12:46.067384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.650 [2024-07-27 01:12:46.182715] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.650 [2024-07-27 01:12:46.182885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.217 01:12:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:55.217 01:12:46 -- common/autotest_common.sh@852 -- # return 0 00:04:55.217 01:12:46 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=518727 00:04:55.217 01:12:46 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:55.217 01:12:46 -- event/cpu_locks.sh@85 -- # waitforlisten 518727 /var/tmp/spdk2.sock 00:04:55.217 01:12:46 -- common/autotest_common.sh@819 -- # '[' -z 518727 ']' 00:04:55.217 01:12:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.217 01:12:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:55.217 01:12:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.217 01:12:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:55.217 01:12:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.476 [2024-07-27 01:12:47.015545] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:55.476 [2024-07-27 01:12:47.015634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518727 ] 00:04:55.476 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.476 [2024-07-27 01:12:47.099053] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:55.476 [2024-07-27 01:12:47.103101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.735 [2024-07-27 01:12:47.327627] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:55.735 [2024-07-27 01:12:47.327799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.301 01:12:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:56.301 01:12:47 -- common/autotest_common.sh@852 -- # return 0 00:04:56.301 01:12:47 -- event/cpu_locks.sh@87 -- # locks_exist 518587 00:04:56.301 01:12:47 -- event/cpu_locks.sh@22 -- # lslocks -p 518587 00:04:56.301 01:12:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.867 lslocks: write error 00:04:56.867 01:12:48 -- event/cpu_locks.sh@89 -- # killprocess 518587 00:04:56.867 01:12:48 -- common/autotest_common.sh@926 -- # '[' -z 518587 ']' 00:04:56.867 01:12:48 -- common/autotest_common.sh@930 -- # kill -0 518587 00:04:56.867 01:12:48 -- common/autotest_common.sh@931 -- # uname 00:04:56.867 01:12:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:56.867 01:12:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 518587 00:04:56.867 01:12:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:56.867 01:12:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:56.867 01:12:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 518587' 00:04:56.867 killing process with pid 518587 00:04:56.867 01:12:48 -- common/autotest_common.sh@945 -- # kill 518587 00:04:56.867 01:12:48 -- common/autotest_common.sh@950 -- # wait 518587 00:04:57.802 01:12:49 -- event/cpu_locks.sh@90 -- # killprocess 518727 00:04:57.802 01:12:49 -- common/autotest_common.sh@926 -- # '[' -z 518727 ']' 00:04:57.802 01:12:49 -- common/autotest_common.sh@930 -- # kill -0 518727 00:04:57.802 01:12:49 -- common/autotest_common.sh@931 -- # uname 00:04:57.802 01:12:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:57.802 01:12:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 518727 00:04:57.802 01:12:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:57.802 01:12:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:57.802 01:12:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 518727' 00:04:57.802 killing process with pid 518727 00:04:57.802 01:12:49 -- common/autotest_common.sh@945 -- # kill 518727 00:04:57.802 01:12:49 -- common/autotest_common.sh@950 -- # wait 518727 00:04:58.060 00:04:58.060 real 0m3.814s 00:04:58.060 user 0m4.161s 00:04:58.060 sys 0m1.060s 00:04:58.060 01:12:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.060 01:12:49 -- common/autotest_common.sh@10 -- # set +x 00:04:58.060 ************************************ 00:04:58.060 END TEST non_locking_app_on_locked_coremask 00:04:58.060 ************************************ 00:04:58.060 01:12:49 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 
00:04:58.060 01:12:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.060 01:12:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.060 01:12:49 -- common/autotest_common.sh@10 -- # set +x 00:04:58.060 ************************************ 00:04:58.060 START TEST locking_app_on_unlocked_coremask 00:04:58.060 ************************************ 00:04:58.060 01:12:49 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:04:58.060 01:12:49 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=519049 00:04:58.060 01:12:49 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:58.060 01:12:49 -- event/cpu_locks.sh@99 -- # waitforlisten 519049 /var/tmp/spdk.sock 00:04:58.060 01:12:49 -- common/autotest_common.sh@819 -- # '[' -z 519049 ']' 00:04:58.060 01:12:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.060 01:12:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:58.060 01:12:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.060 01:12:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:58.060 01:12:49 -- common/autotest_common.sh@10 -- # set +x 00:04:58.317 [2024-07-27 01:12:49.851646] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:58.317 [2024-07-27 01:12:49.851742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519049 ] 00:04:58.317 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.317 [2024-07-27 01:12:49.913004] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:58.317 [2024-07-27 01:12:49.913050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.317 [2024-07-27 01:12:50.026425] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:58.317 [2024-07-27 01:12:50.026614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.250 01:12:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:59.250 01:12:50 -- common/autotest_common.sh@852 -- # return 0 00:04:59.250 01:12:50 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=519182 00:04:59.250 01:12:50 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:59.250 01:12:50 -- event/cpu_locks.sh@103 -- # waitforlisten 519182 /var/tmp/spdk2.sock 00:04:59.250 01:12:50 -- common/autotest_common.sh@819 -- # '[' -z 519182 ']' 00:04:59.250 01:12:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.250 01:12:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:59.250 01:12:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:59.250 01:12:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:59.250 01:12:50 -- common/autotest_common.sh@10 -- # set +x 00:04:59.250 [2024-07-27 01:12:50.827492] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:59.250 [2024-07-27 01:12:50.827577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519182 ] 00:04:59.250 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.250 [2024-07-27 01:12:50.924748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.509 [2024-07-27 01:12:51.156507] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:59.509 [2024-07-27 01:12:51.156706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.075 01:12:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:00.075 01:12:51 -- common/autotest_common.sh@852 -- # return 0 00:05:00.075 01:12:51 -- event/cpu_locks.sh@105 -- # locks_exist 519182 00:05:00.075 01:12:51 -- event/cpu_locks.sh@22 -- # lslocks -p 519182 00:05:00.075 01:12:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.640 lslocks: write error 00:05:00.640 01:12:52 -- event/cpu_locks.sh@107 -- # killprocess 519049 00:05:00.640 01:12:52 -- common/autotest_common.sh@926 -- # '[' -z 519049 ']' 00:05:00.641 01:12:52 -- common/autotest_common.sh@930 -- # kill -0 519049 00:05:00.641 01:12:52 -- common/autotest_common.sh@931 -- # uname 00:05:00.641 01:12:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:00.641 01:12:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 519049 00:05:00.641 01:12:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:00.641 01:12:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:00.641 01:12:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 519049' 00:05:00.641 killing process with pid 519049 00:05:00.641 01:12:52 -- common/autotest_common.sh@945 -- # kill 519049 00:05:00.641 01:12:52 -- common/autotest_common.sh@950 -- # wait 519049 00:05:01.575 01:12:53 -- event/cpu_locks.sh@108 -- # killprocess 519182 00:05:01.575 01:12:53 -- common/autotest_common.sh@926 -- # '[' -z 519182 ']' 00:05:01.575 01:12:53 -- common/autotest_common.sh@930 -- # kill -0 519182 00:05:01.575 01:12:53 -- common/autotest_common.sh@931 -- # uname 00:05:01.575 01:12:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:01.575 01:12:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 519182 00:05:01.575 01:12:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:01.575 01:12:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:01.575 01:12:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 519182' 00:05:01.575 killing process with pid 519182 00:05:01.575 01:12:53 -- common/autotest_common.sh@945 -- # kill 519182 00:05:01.575 01:12:53 -- common/autotest_common.sh@950 -- # wait 519182 00:05:02.141 00:05:02.141 real 0m3.787s 00:05:02.141 user 0m4.064s 00:05:02.141 sys 0m1.090s 00:05:02.141 01:12:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.141 01:12:53 -- common/autotest_common.sh@10 -- # set +x 00:05:02.141 ************************************ 00:05:02.141 END TEST locking_app_on_unlocked_coremask 00:05:02.141 
************************************ 00:05:02.141 01:12:53 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:02.141 01:12:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:02.141 01:12:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:02.141 01:12:53 -- common/autotest_common.sh@10 -- # set +x 00:05:02.141 ************************************ 00:05:02.141 START TEST locking_app_on_locked_coremask 00:05:02.141 ************************************ 00:05:02.141 01:12:53 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:02.141 01:12:53 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=519508 00:05:02.141 01:12:53 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.141 01:12:53 -- event/cpu_locks.sh@116 -- # waitforlisten 519508 /var/tmp/spdk.sock 00:05:02.141 01:12:53 -- common/autotest_common.sh@819 -- # '[' -z 519508 ']' 00:05:02.141 01:12:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.141 01:12:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:02.141 01:12:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.141 01:12:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:02.141 01:12:53 -- common/autotest_common.sh@10 -- # set +x 00:05:02.141 [2024-07-27 01:12:53.662296] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:02.141 [2024-07-27 01:12:53.662390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519508 ] 00:05:02.141 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.141 [2024-07-27 01:12:53.723168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.141 [2024-07-27 01:12:53.842033] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:02.141 [2024-07-27 01:12:53.842231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.073 01:12:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:03.073 01:12:54 -- common/autotest_common.sh@852 -- # return 0 00:05:03.073 01:12:54 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=519635 00:05:03.073 01:12:54 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:03.073 01:12:54 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 519635 /var/tmp/spdk2.sock 00:05:03.073 01:12:54 -- common/autotest_common.sh@640 -- # local es=0 00:05:03.073 01:12:54 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 519635 /var/tmp/spdk2.sock 00:05:03.073 01:12:54 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:03.073 01:12:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:03.073 01:12:54 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:03.073 01:12:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:03.073 01:12:54 -- common/autotest_common.sh@643 -- # waitforlisten 519635 /var/tmp/spdk2.sock 00:05:03.073 01:12:54 -- common/autotest_common.sh@819 -- # '[' -z 519635 ']' 
00:05:03.073 01:12:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.073 01:12:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:03.073 01:12:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.073 01:12:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:03.073 01:12:54 -- common/autotest_common.sh@10 -- # set +x 00:05:03.073 [2024-07-27 01:12:54.710100] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:03.073 [2024-07-27 01:12:54.710185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519635 ] 00:05:03.073 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.073 [2024-07-27 01:12:54.807889] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 519508 has claimed it. 00:05:03.073 [2024-07-27 01:12:54.807950] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:04.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (519635) - No such process 00:05:04.004 ERROR: process (pid: 519635) is no longer running 00:05:04.004 01:12:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:04.004 01:12:55 -- common/autotest_common.sh@852 -- # return 1 00:05:04.004 01:12:55 -- common/autotest_common.sh@643 -- # es=1 00:05:04.004 01:12:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:04.004 01:12:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:04.005 01:12:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:04.005 01:12:55 -- event/cpu_locks.sh@122 -- # locks_exist 519508 00:05:04.005 01:12:55 -- event/cpu_locks.sh@22 -- # lslocks -p 519508 00:05:04.005 01:12:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.262 lslocks: write error 00:05:04.262 01:12:55 -- event/cpu_locks.sh@124 -- # killprocess 519508 00:05:04.262 01:12:55 -- common/autotest_common.sh@926 -- # '[' -z 519508 ']' 00:05:04.262 01:12:55 -- common/autotest_common.sh@930 -- # kill -0 519508 00:05:04.262 01:12:55 -- common/autotest_common.sh@931 -- # uname 00:05:04.263 01:12:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:04.263 01:12:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 519508 00:05:04.263 01:12:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:04.263 01:12:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:04.263 01:12:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 519508' 00:05:04.263 killing process with pid 519508 00:05:04.263 01:12:55 -- common/autotest_common.sh@945 -- # kill 519508 00:05:04.263 01:12:55 -- common/autotest_common.sh@950 -- # wait 519508 00:05:04.828 00:05:04.828 real 0m2.698s 00:05:04.828 user 0m3.089s 00:05:04.828 sys 0m0.705s 00:05:04.828 01:12:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.828 01:12:56 -- common/autotest_common.sh@10 -- # set +x 00:05:04.828 ************************************ 00:05:04.828 END TEST locking_app_on_locked_coremask 00:05:04.828 ************************************ 00:05:04.828 01:12:56 -- event/cpu_locks.sh@171 -- 
# run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:04.828 01:12:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:04.828 01:12:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:04.828 01:12:56 -- common/autotest_common.sh@10 -- # set +x 00:05:04.828 ************************************ 00:05:04.828 START TEST locking_overlapped_coremask 00:05:04.828 ************************************ 00:05:04.828 01:12:56 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:04.828 01:12:56 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=519934 00:05:04.828 01:12:56 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:04.828 01:12:56 -- event/cpu_locks.sh@133 -- # waitforlisten 519934 /var/tmp/spdk.sock 00:05:04.828 01:12:56 -- common/autotest_common.sh@819 -- # '[' -z 519934 ']' 00:05:04.828 01:12:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.828 01:12:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:04.828 01:12:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.828 01:12:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:04.828 01:12:56 -- common/autotest_common.sh@10 -- # set +x 00:05:04.828 [2024-07-27 01:12:56.383951] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:04.828 [2024-07-27 01:12:56.384042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519934 ] 00:05:04.828 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.828 [2024-07-27 01:12:56.440792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.828 [2024-07-27 01:12:56.550572] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:04.828 [2024-07-27 01:12:56.550768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.828 [2024-07-27 01:12:56.550828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.828 [2024-07-27 01:12:56.550831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.761 01:12:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:05.761 01:12:57 -- common/autotest_common.sh@852 -- # return 0 00:05:05.761 01:12:57 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=520068 00:05:05.761 01:12:57 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 520068 /var/tmp/spdk2.sock 00:05:05.761 01:12:57 -- common/autotest_common.sh@640 -- # local es=0 00:05:05.761 01:12:57 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 520068 /var/tmp/spdk2.sock 00:05:05.761 01:12:57 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:05.761 01:12:57 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:05.761 01:12:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:05.761 01:12:57 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:05.761 01:12:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:05.761 01:12:57 -- common/autotest_common.sh@643 -- # 
waitforlisten 520068 /var/tmp/spdk2.sock 00:05:05.761 01:12:57 -- common/autotest_common.sh@819 -- # '[' -z 520068 ']' 00:05:05.761 01:12:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.761 01:12:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:05.761 01:12:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:05.761 01:12:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:05.761 01:12:57 -- common/autotest_common.sh@10 -- # set +x 00:05:05.761 [2024-07-27 01:12:57.373854] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:05.761 [2024-07-27 01:12:57.373929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520068 ] 00:05:05.761 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.761 [2024-07-27 01:12:57.462417] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 519934 has claimed it. 00:05:05.761 [2024-07-27 01:12:57.462465] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:06.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (520068) - No such process 00:05:06.324 ERROR: process (pid: 520068) is no longer running 00:05:06.325 01:12:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:06.325 01:12:58 -- common/autotest_common.sh@852 -- # return 1 00:05:06.325 01:12:58 -- common/autotest_common.sh@643 -- # es=1 00:05:06.325 01:12:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:06.325 01:12:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:06.325 01:12:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:06.325 01:12:58 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:06.325 01:12:58 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:06.325 01:12:58 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:06.325 01:12:58 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:06.325 01:12:58 -- event/cpu_locks.sh@141 -- # killprocess 519934 00:05:06.325 01:12:58 -- common/autotest_common.sh@926 -- # '[' -z 519934 ']' 00:05:06.325 01:12:58 -- common/autotest_common.sh@930 -- # kill -0 519934 00:05:06.325 01:12:58 -- common/autotest_common.sh@931 -- # uname 00:05:06.325 01:12:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:06.325 01:12:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 519934 00:05:06.582 01:12:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:06.582 01:12:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:06.582 01:12:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 519934' 00:05:06.582 killing process with pid 519934 00:05:06.582 01:12:58 -- common/autotest_common.sh@945 -- # kill 519934 00:05:06.582 01:12:58 -- common/autotest_common.sh@950 -- # wait 519934 
00:05:06.840 00:05:06.840 real 0m2.214s 00:05:06.840 user 0m6.278s 00:05:06.840 sys 0m0.459s 00:05:06.840 01:12:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.840 01:12:58 -- common/autotest_common.sh@10 -- # set +x 00:05:06.840 ************************************ 00:05:06.840 END TEST locking_overlapped_coremask 00:05:06.840 ************************************ 00:05:06.840 01:12:58 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:06.840 01:12:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.840 01:12:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.840 01:12:58 -- common/autotest_common.sh@10 -- # set +x 00:05:06.840 ************************************ 00:05:06.840 START TEST locking_overlapped_coremask_via_rpc 00:05:06.840 ************************************ 00:05:06.840 01:12:58 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:06.840 01:12:58 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=520242 00:05:06.840 01:12:58 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:06.840 01:12:58 -- event/cpu_locks.sh@149 -- # waitforlisten 520242 /var/tmp/spdk.sock 00:05:06.840 01:12:58 -- common/autotest_common.sh@819 -- # '[' -z 520242 ']' 00:05:06.840 01:12:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.840 01:12:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:06.840 01:12:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.840 01:12:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:06.840 01:12:58 -- common/autotest_common.sh@10 -- # set +x 00:05:07.099 [2024-07-27 01:12:58.621813] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:07.099 [2024-07-27 01:12:58.621892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520242 ] 00:05:07.099 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.099 [2024-07-27 01:12:58.679433] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:07.099 [2024-07-27 01:12:58.679475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:07.099 [2024-07-27 01:12:58.789665] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:07.099 [2024-07-27 01:12:58.789890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.099 [2024-07-27 01:12:58.789957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.099 [2024-07-27 01:12:58.789960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.033 01:12:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:08.033 01:12:59 -- common/autotest_common.sh@852 -- # return 0 00:05:08.033 01:12:59 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=520376 00:05:08.033 01:12:59 -- event/cpu_locks.sh@153 -- # waitforlisten 520376 /var/tmp/spdk2.sock 00:05:08.033 01:12:59 -- common/autotest_common.sh@819 -- # '[' -z 520376 ']' 00:05:08.033 01:12:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.033 01:12:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:08.033 01:12:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:08.033 01:12:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:08.033 01:12:59 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:08.033 01:12:59 -- common/autotest_common.sh@10 -- # set +x 00:05:08.033 [2024-07-27 01:12:59.613516] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:08.033 [2024-07-27 01:12:59.613603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520376 ] 00:05:08.033 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.033 [2024-07-27 01:12:59.702573] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:08.033 [2024-07-27 01:12:59.702611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:08.291 [2024-07-27 01:12:59.918581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:08.291 [2024-07-27 01:12:59.918818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.291 [2024-07-27 01:12:59.922130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:08.291 [2024-07-27 01:12:59.922132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.857 01:13:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:08.857 01:13:00 -- common/autotest_common.sh@852 -- # return 0 00:05:08.857 01:13:00 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:08.858 01:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:08.858 01:13:00 -- common/autotest_common.sh@10 -- # set +x 00:05:08.858 01:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:08.858 01:13:00 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:08.858 01:13:00 -- common/autotest_common.sh@640 -- # local es=0 00:05:08.858 01:13:00 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:08.858 01:13:00 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:08.858 01:13:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:08.858 01:13:00 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:08.858 01:13:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:08.858 01:13:00 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:08.858 01:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:08.858 01:13:00 -- common/autotest_common.sh@10 -- # set +x 00:05:08.858 [2024-07-27 01:13:00.580160] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 520242 has claimed it. 00:05:08.858 request: 00:05:08.858 { 00:05:08.858 "method": "framework_enable_cpumask_locks", 00:05:08.858 "req_id": 1 00:05:08.858 } 00:05:08.858 Got JSON-RPC error response 00:05:08.858 response: 00:05:08.858 { 00:05:08.858 "code": -32603, 00:05:08.858 "message": "Failed to claim CPU core: 2" 00:05:08.858 } 00:05:08.858 01:13:00 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:08.858 01:13:00 -- common/autotest_common.sh@643 -- # es=1 00:05:08.858 01:13:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:08.858 01:13:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:08.858 01:13:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:08.858 01:13:00 -- event/cpu_locks.sh@158 -- # waitforlisten 520242 /var/tmp/spdk.sock 00:05:08.858 01:13:00 -- common/autotest_common.sh@819 -- # '[' -z 520242 ']' 00:05:08.858 01:13:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.858 01:13:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:08.858 01:13:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:08.858 01:13:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:08.858 01:13:00 -- common/autotest_common.sh@10 -- # set +x 00:05:09.146 01:13:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:09.146 01:13:00 -- common/autotest_common.sh@852 -- # return 0 00:05:09.146 01:13:00 -- event/cpu_locks.sh@159 -- # waitforlisten 520376 /var/tmp/spdk2.sock 00:05:09.146 01:13:00 -- common/autotest_common.sh@819 -- # '[' -z 520376 ']' 00:05:09.146 01:13:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.146 01:13:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:09.146 01:13:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.146 01:13:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:09.146 01:13:00 -- common/autotest_common.sh@10 -- # set +x 00:05:09.405 01:13:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:09.405 01:13:01 -- common/autotest_common.sh@852 -- # return 0 00:05:09.405 01:13:01 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:09.405 01:13:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:09.405 01:13:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:09.405 01:13:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:09.405 00:05:09.405 real 0m2.495s 00:05:09.405 user 0m1.191s 00:05:09.405 sys 0m0.237s 00:05:09.405 01:13:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.405 01:13:01 -- common/autotest_common.sh@10 -- # set +x 00:05:09.405 ************************************ 00:05:09.405 END TEST locking_overlapped_coremask_via_rpc 00:05:09.405 ************************************ 00:05:09.405 01:13:01 -- event/cpu_locks.sh@174 -- # cleanup 00:05:09.405 01:13:01 -- event/cpu_locks.sh@15 -- # [[ -z 520242 ]] 00:05:09.405 01:13:01 -- event/cpu_locks.sh@15 -- # killprocess 520242 00:05:09.405 01:13:01 -- common/autotest_common.sh@926 -- # '[' -z 520242 ']' 00:05:09.405 01:13:01 -- common/autotest_common.sh@930 -- # kill -0 520242 00:05:09.405 01:13:01 -- common/autotest_common.sh@931 -- # uname 00:05:09.405 01:13:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:09.405 01:13:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 520242 00:05:09.405 01:13:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:09.405 01:13:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:09.405 01:13:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 520242' 00:05:09.405 killing process with pid 520242 00:05:09.405 01:13:01 -- common/autotest_common.sh@945 -- # kill 520242 00:05:09.405 01:13:01 -- common/autotest_common.sh@950 -- # wait 520242 00:05:09.970 01:13:01 -- event/cpu_locks.sh@16 -- # [[ -z 520376 ]] 00:05:09.970 01:13:01 -- event/cpu_locks.sh@16 -- # killprocess 520376 00:05:09.970 01:13:01 -- common/autotest_common.sh@926 -- # '[' -z 520376 ']' 00:05:09.970 01:13:01 -- common/autotest_common.sh@930 -- # kill -0 520376 00:05:09.970 01:13:01 -- common/autotest_common.sh@931 -- # uname 00:05:09.970 
01:13:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:09.970 01:13:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 520376 00:05:09.970 01:13:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:09.970 01:13:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:09.970 01:13:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 520376' 00:05:09.970 killing process with pid 520376 00:05:09.970 01:13:01 -- common/autotest_common.sh@945 -- # kill 520376 00:05:09.970 01:13:01 -- common/autotest_common.sh@950 -- # wait 520376 00:05:10.538 01:13:02 -- event/cpu_locks.sh@18 -- # rm -f 00:05:10.538 01:13:02 -- event/cpu_locks.sh@1 -- # cleanup 00:05:10.538 01:13:02 -- event/cpu_locks.sh@15 -- # [[ -z 520242 ]] 00:05:10.538 01:13:02 -- event/cpu_locks.sh@15 -- # killprocess 520242 00:05:10.538 01:13:02 -- common/autotest_common.sh@926 -- # '[' -z 520242 ']' 00:05:10.538 01:13:02 -- common/autotest_common.sh@930 -- # kill -0 520242 00:05:10.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (520242) - No such process 00:05:10.538 01:13:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 520242 is not found' 00:05:10.538 Process with pid 520242 is not found 00:05:10.538 01:13:02 -- event/cpu_locks.sh@16 -- # [[ -z 520376 ]] 00:05:10.538 01:13:02 -- event/cpu_locks.sh@16 -- # killprocess 520376 00:05:10.538 01:13:02 -- common/autotest_common.sh@926 -- # '[' -z 520376 ']' 00:05:10.538 01:13:02 -- common/autotest_common.sh@930 -- # kill -0 520376 00:05:10.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (520376) - No such process 00:05:10.538 01:13:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 520376 is not found' 00:05:10.538 Process with pid 520376 is not found 00:05:10.538 01:13:02 -- event/cpu_locks.sh@18 -- # rm -f 00:05:10.538 00:05:10.538 real 0m19.722s 00:05:10.538 user 0m35.022s 00:05:10.538 sys 0m5.498s 00:05:10.538 01:13:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.538 01:13:02 -- common/autotest_common.sh@10 -- # set +x 00:05:10.538 ************************************ 00:05:10.538 END TEST cpu_locks 00:05:10.538 ************************************ 00:05:10.538 00:05:10.538 real 0m45.216s 00:05:10.538 user 1m25.315s 00:05:10.538 sys 0m9.400s 00:05:10.538 01:13:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.538 01:13:02 -- common/autotest_common.sh@10 -- # set +x 00:05:10.538 ************************************ 00:05:10.538 END TEST event 00:05:10.538 ************************************ 00:05:10.538 01:13:02 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:10.538 01:13:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:10.538 01:13:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.538 01:13:02 -- common/autotest_common.sh@10 -- # set +x 00:05:10.538 ************************************ 00:05:10.538 START TEST thread 00:05:10.538 ************************************ 00:05:10.538 01:13:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:10.538 * Looking for test storage... 
00:05:10.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:10.538 01:13:02 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:10.538 01:13:02 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:10.538 01:13:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.538 01:13:02 -- common/autotest_common.sh@10 -- # set +x 00:05:10.538 ************************************ 00:05:10.538 START TEST thread_poller_perf 00:05:10.538 ************************************ 00:05:10.538 01:13:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:10.538 [2024-07-27 01:13:02.167081] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:10.538 [2024-07-27 01:13:02.167174] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520760 ] 00:05:10.538 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.538 [2024-07-27 01:13:02.227298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.797 [2024-07-27 01:13:02.336727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.797 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:11.729 ====================================== 00:05:11.729 busy:2715258562 (cyc) 00:05:11.729 total_run_count: 282000 00:05:11.729 tsc_hz: 2700000000 (cyc) 00:05:11.729 ====================================== 00:05:11.729 poller_cost: 9628 (cyc), 3565 (nsec) 00:05:11.729 00:05:11.729 real 0m1.318s 00:05:11.729 user 0m1.227s 00:05:11.729 sys 0m0.085s 00:05:11.729 01:13:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.729 01:13:03 -- common/autotest_common.sh@10 -- # set +x 00:05:11.729 ************************************ 00:05:11.729 END TEST thread_poller_perf 00:05:11.729 ************************************ 00:05:11.986 01:13:03 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:11.986 01:13:03 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:11.986 01:13:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.986 01:13:03 -- common/autotest_common.sh@10 -- # set +x 00:05:11.986 ************************************ 00:05:11.986 START TEST thread_poller_perf 00:05:11.986 ************************************ 00:05:11.986 01:13:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:11.986 [2024-07-27 01:13:03.513696] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:11.986 [2024-07-27 01:13:03.513779] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520914 ] 00:05:11.986 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.986 [2024-07-27 01:13:03.578484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.986 [2024-07-27 01:13:03.692169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.986 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:13.359 ====================================== 00:05:13.359 busy:2703598540 (cyc) 00:05:13.359 total_run_count: 3842000 00:05:13.359 tsc_hz: 2700000000 (cyc) 00:05:13.359 ====================================== 00:05:13.359 poller_cost: 703 (cyc), 260 (nsec) 00:05:13.359 00:05:13.359 real 0m1.319s 00:05:13.359 user 0m1.234s 00:05:13.359 sys 0m0.078s 00:05:13.359 01:13:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.359 01:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:13.359 ************************************ 00:05:13.359 END TEST thread_poller_perf 00:05:13.359 ************************************ 00:05:13.359 01:13:04 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:13.359 00:05:13.359 real 0m2.741s 00:05:13.359 user 0m2.501s 00:05:13.359 sys 0m0.241s 00:05:13.359 01:13:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.359 01:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:13.359 ************************************ 00:05:13.359 END TEST thread 00:05:13.359 ************************************ 00:05:13.359 01:13:04 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:13.359 01:13:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.359 01:13:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.359 01:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:13.359 ************************************ 00:05:13.359 START TEST accel 00:05:13.359 ************************************ 00:05:13.359 01:13:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:13.359 * Looking for test storage... 00:05:13.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:13.359 01:13:04 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:13.359 01:13:04 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:13.359 01:13:04 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:13.359 01:13:04 -- accel/accel.sh@59 -- # spdk_tgt_pid=521128 00:05:13.359 01:13:04 -- accel/accel.sh@60 -- # waitforlisten 521128 00:05:13.359 01:13:04 -- common/autotest_common.sh@819 -- # '[' -z 521128 ']' 00:05:13.359 01:13:04 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:13.359 01:13:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.359 01:13:04 -- accel/accel.sh@58 -- # build_accel_config 00:05:13.359 01:13:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:13.359 01:13:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:13.359 01:13:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:13.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.359 01:13:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:13.359 01:13:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.359 01:13:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.359 01:13:04 -- common/autotest_common.sh@10 -- # set +x 00:05:13.359 01:13:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:13.359 01:13:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:13.359 01:13:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:13.359 01:13:04 -- accel/accel.sh@42 -- # jq -r . 00:05:13.359 [2024-07-27 01:13:04.959610] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:13.359 [2024-07-27 01:13:04.959712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521128 ] 00:05:13.359 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.359 [2024-07-27 01:13:05.018635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.618 [2024-07-27 01:13:05.129956] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:13.618 [2024-07-27 01:13:05.130132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.184 01:13:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:14.184 01:13:05 -- common/autotest_common.sh@852 -- # return 0 00:05:14.184 01:13:05 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:14.184 01:13:05 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:14.184 01:13:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.184 01:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:14.184 01:13:05 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:14.184 01:13:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.184 01:13:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:14.184 01:13:05 -- accel/accel.sh@64 -- # IFS== 00:05:14.184 01:13:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:14.184 01:13:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:14.184 01:13:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:14.184 01:13:05 -- accel/accel.sh@64 -- # IFS== 00:05:14.184 01:13:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:14.184 01:13:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:14.184 01:13:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:14.184 01:13:05 -- accel/accel.sh@64 -- # IFS== 00:05:14.184 01:13:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:14.184 01:13:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:14.184 01:13:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:14.184 01:13:05 -- accel/accel.sh@64 -- # IFS== 00:05:14.184 01:13:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:14.184 01:13:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:14.184 01:13:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:14.184 01:13:05 -- accel/accel.sh@64 -- # IFS== 00:05:14.184 01:13:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:14.184 01:13:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:14.184 01:13:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:14.184 01:13:05 -- accel/accel.sh@64 -- # IFS== 00:05:14.184 01:13:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:14.184 01:13:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:14.184 01:13:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:14.184 01:13:05 -- accel/accel.sh@64 -- # IFS== 00:05:14.184 01:13:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:14.442 01:13:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:14.442 01:13:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:14.442 01:13:05 -- accel/accel.sh@64 -- # IFS== 00:05:14.443 01:13:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:14.443 01:13:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:14.443 01:13:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:14.443 01:13:05 -- accel/accel.sh@64 -- # IFS== 00:05:14.443 01:13:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:14.443 01:13:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:14.443 01:13:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:14.443 01:13:05 -- accel/accel.sh@64 -- # IFS== 00:05:14.443 01:13:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:14.443 01:13:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:14.443 01:13:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:14.443 01:13:05 -- accel/accel.sh@64 -- # IFS== 00:05:14.443 01:13:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:14.443 01:13:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:14.443 01:13:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:14.443 01:13:05 -- accel/accel.sh@64 -- # IFS== 00:05:14.443 01:13:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:14.443 01:13:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:14.443 01:13:05 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:14.443 01:13:05 -- accel/accel.sh@64 -- # IFS== 00:05:14.443 01:13:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:14.443 01:13:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:14.443 01:13:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:14.443 01:13:05 -- accel/accel.sh@64 -- # IFS== 00:05:14.443 01:13:05 -- accel/accel.sh@64 -- # read -r opc module 00:05:14.443 01:13:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:14.443 01:13:05 -- accel/accel.sh@67 -- # killprocess 521128 00:05:14.443 01:13:05 -- common/autotest_common.sh@926 -- # '[' -z 521128 ']' 00:05:14.443 01:13:05 -- common/autotest_common.sh@930 -- # kill -0 521128 00:05:14.443 01:13:05 -- common/autotest_common.sh@931 -- # uname 00:05:14.443 01:13:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:14.443 01:13:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 521128 00:05:14.443 01:13:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:14.443 01:13:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:14.443 01:13:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 521128' 00:05:14.443 killing process with pid 521128 00:05:14.443 01:13:05 -- common/autotest_common.sh@945 -- # kill 521128 00:05:14.443 01:13:05 -- common/autotest_common.sh@950 -- # wait 521128 00:05:14.701 01:13:06 -- accel/accel.sh@68 -- # trap - ERR 00:05:14.701 01:13:06 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:14.701 01:13:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:14.701 01:13:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.701 01:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:14.701 01:13:06 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:14.701 01:13:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:14.701 01:13:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:14.701 01:13:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:14.701 01:13:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.701 01:13:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.701 01:13:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:14.701 01:13:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:14.701 01:13:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:14.701 01:13:06 -- accel/accel.sh@42 -- # jq -r . 
00:05:14.701 01:13:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.701 01:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:14.960 01:13:06 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:14.960 01:13:06 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:14.960 01:13:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.960 01:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:14.960 ************************************ 00:05:14.960 START TEST accel_missing_filename 00:05:14.960 ************************************ 00:05:14.960 01:13:06 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:14.960 01:13:06 -- common/autotest_common.sh@640 -- # local es=0 00:05:14.960 01:13:06 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:14.960 01:13:06 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:14.960 01:13:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:14.960 01:13:06 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:14.960 01:13:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:14.960 01:13:06 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:14.960 01:13:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:14.960 01:13:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:14.960 01:13:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:14.960 01:13:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.960 01:13:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.960 01:13:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:14.960 01:13:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:14.960 01:13:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:14.960 01:13:06 -- accel/accel.sh@42 -- # jq -r . 00:05:14.960 [2024-07-27 01:13:06.494733] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:14.960 [2024-07-27 01:13:06.494808] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521408 ] 00:05:14.960 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.960 [2024-07-27 01:13:06.555986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.960 [2024-07-27 01:13:06.672495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.219 [2024-07-27 01:13:06.734210] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:15.219 [2024-07-27 01:13:06.822909] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:15.219 A filename is required. 
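Note: the "A filename is required." failure above is the expected outcome of accel_missing_filename; the compress workload needs an input file via -l (see the -l entry in the accel_perf usage text printed further down), and the test deliberately omits it. A sketch of the two invocations, with the Jenkins paths and the -c config flag left out for brevity:

    accel_perf -t 1 -w compress                      # as run above: rejected, no input file given
    accel_perf -t 1 -w compress -l ./test/accel/bib  # the form that would supply one (the bib sample the next test uses)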
00:05:15.219 01:13:06 -- common/autotest_common.sh@643 -- # es=234 00:05:15.219 01:13:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:15.219 01:13:06 -- common/autotest_common.sh@652 -- # es=106 00:05:15.219 01:13:06 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:15.219 01:13:06 -- common/autotest_common.sh@660 -- # es=1 00:05:15.219 01:13:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:15.219 00:05:15.219 real 0m0.469s 00:05:15.219 user 0m0.355s 00:05:15.219 sys 0m0.148s 00:05:15.219 01:13:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.219 01:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:15.219 ************************************ 00:05:15.219 END TEST accel_missing_filename 00:05:15.219 ************************************ 00:05:15.219 01:13:06 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:15.219 01:13:06 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:15.219 01:13:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.219 01:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:15.219 ************************************ 00:05:15.219 START TEST accel_compress_verify 00:05:15.219 ************************************ 00:05:15.219 01:13:06 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:15.219 01:13:06 -- common/autotest_common.sh@640 -- # local es=0 00:05:15.219 01:13:06 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:15.219 01:13:06 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:15.219 01:13:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:15.219 01:13:06 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:15.219 01:13:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:15.219 01:13:06 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:15.478 01:13:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:15.478 01:13:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.478 01:13:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.478 01:13:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.478 01:13:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.478 01:13:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.478 01:13:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.478 01:13:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.478 01:13:06 -- accel/accel.sh@42 -- # jq -r . 00:05:15.478 [2024-07-27 01:13:06.994494] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:15.478 [2024-07-27 01:13:06.994581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521437 ] 00:05:15.478 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.478 [2024-07-27 01:13:07.058065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.478 [2024-07-27 01:13:07.174939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.478 [2024-07-27 01:13:07.233092] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:15.736 [2024-07-27 01:13:07.315513] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:15.736 00:05:15.736 Compression does not support the verify option, aborting. 00:05:15.736 01:13:07 -- common/autotest_common.sh@643 -- # es=161 00:05:15.736 01:13:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:15.736 01:13:07 -- common/autotest_common.sh@652 -- # es=33 00:05:15.736 01:13:07 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:15.736 01:13:07 -- common/autotest_common.sh@660 -- # es=1 00:05:15.736 01:13:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:15.736 00:05:15.736 real 0m0.465s 00:05:15.736 user 0m0.362s 00:05:15.736 sys 0m0.138s 00:05:15.736 01:13:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.736 01:13:07 -- common/autotest_common.sh@10 -- # set +x 00:05:15.736 ************************************ 00:05:15.736 END TEST accel_compress_verify 00:05:15.736 ************************************ 00:05:15.736 01:13:07 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:15.736 01:13:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:15.736 01:13:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.736 01:13:07 -- common/autotest_common.sh@10 -- # set +x 00:05:15.736 ************************************ 00:05:15.736 START TEST accel_wrong_workload 00:05:15.736 ************************************ 00:05:15.736 01:13:07 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:15.736 01:13:07 -- common/autotest_common.sh@640 -- # local es=0 00:05:15.736 01:13:07 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:15.736 01:13:07 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:15.736 01:13:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:15.736 01:13:07 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:15.736 01:13:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:15.736 01:13:07 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:15.736 01:13:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:15.737 01:13:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.737 01:13:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.737 01:13:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.737 01:13:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.737 01:13:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.737 01:13:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.737 01:13:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.737 01:13:07 -- accel/accel.sh@42 -- # jq -r . 
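Note: both compression negative tests above (missing filename, and compress with the unsupported verify switch) count as passes because the NOT/es bookkeeping traced here inverts the result. From the values shown (es 234 -> 106 -> 1 and 161 -> 33 -> 1) the normalization appears to be roughly the following; this is an inferred sketch, not code quoted from autotest_common.sh:

    es=$?                                  # raw exit status from accel_perf
    (( es > 128 )) && es=$(( es - 128 ))   # strip the signal offset: 234 -> 106, 161 -> 33
    case "$es" in
      0) ;;                                # command unexpectedly succeeded
      *) es=1 ;;                           # any failure collapses to 1
    esac
    (( !es == 0 ))                         # NOT: succeed only when the wrapped command failed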
00:05:15.737 Unsupported workload type: foobar 00:05:15.737 [2024-07-27 01:13:07.478737] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:15.737 accel_perf options: 00:05:15.737 [-h help message] 00:05:15.737 [-q queue depth per core] 00:05:15.737 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:15.737 [-T number of threads per core 00:05:15.737 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:15.737 [-t time in seconds] 00:05:15.737 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:15.737 [ dif_verify, , dif_generate, dif_generate_copy 00:05:15.737 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:15.737 [-l for compress/decompress workloads, name of uncompressed input file 00:05:15.737 [-S for crc32c workload, use this seed value (default 0) 00:05:15.737 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:15.737 [-f for fill workload, use this BYTE value (default 255) 00:05:15.737 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:15.737 [-y verify result if this switch is on] 00:05:15.737 [-a tasks to allocate per core (default: same value as -q)] 00:05:15.737 Can be used to spread operations across a wider range of memory. 00:05:15.737 01:13:07 -- common/autotest_common.sh@643 -- # es=1 00:05:15.737 01:13:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:15.737 01:13:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:15.737 01:13:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:15.737 00:05:15.737 real 0m0.022s 00:05:15.737 user 0m0.015s 00:05:15.737 sys 0m0.007s 00:05:15.737 01:13:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.737 01:13:07 -- common/autotest_common.sh@10 -- # set +x 00:05:15.737 ************************************ 00:05:15.737 END TEST accel_wrong_workload 00:05:15.737 ************************************ 00:05:15.996 Error: writing output failed: Broken pipe 00:05:15.996 01:13:07 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:15.996 01:13:07 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:15.996 01:13:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.996 01:13:07 -- common/autotest_common.sh@10 -- # set +x 00:05:15.996 ************************************ 00:05:15.996 START TEST accel_negative_buffers 00:05:15.996 ************************************ 00:05:15.996 01:13:07 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:15.996 01:13:07 -- common/autotest_common.sh@640 -- # local es=0 00:05:15.996 01:13:07 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:15.996 01:13:07 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:15.996 01:13:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:15.996 01:13:07 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:15.996 01:13:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:15.996 01:13:07 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:15.996 01:13:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:05:15.996 01:13:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.996 01:13:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.996 01:13:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.996 01:13:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.996 01:13:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.996 01:13:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.996 01:13:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.996 01:13:07 -- accel/accel.sh@42 -- # jq -r . 00:05:15.996 -x option must be non-negative. 00:05:15.996 [2024-07-27 01:13:07.529801] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:15.996 accel_perf options: 00:05:15.996 [-h help message] 00:05:15.996 [-q queue depth per core] 00:05:15.996 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:15.996 [-T number of threads per core 00:05:15.996 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:15.996 [-t time in seconds] 00:05:15.996 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:15.996 [ dif_verify, , dif_generate, dif_generate_copy 00:05:15.996 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:15.996 [-l for compress/decompress workloads, name of uncompressed input file 00:05:15.996 [-S for crc32c workload, use this seed value (default 0) 00:05:15.996 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:15.996 [-f for fill workload, use this BYTE value (default 255) 00:05:15.996 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:15.996 [-y verify result if this switch is on] 00:05:15.996 [-a tasks to allocate per core (default: same value as -q)] 00:05:15.996 Can be used to spread operations across a wider range of memory. 
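Note: the accel_perf usage text above is printed because the arguments were rejected; the two rejected invocations exercised by accel_wrong_workload and accel_negative_buffers are, with paths shortened but flags exactly as traced above:

    accel_perf -t 1 -w foobar          # unsupported workload type -> parameter 'w' fails to parse
    accel_perf -t 1 -w xor -y -x -1    # -x (source buffer count) must be non-negative -> parameter 'x' fails to parse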
00:05:15.996 01:13:07 -- common/autotest_common.sh@643 -- # es=1 00:05:15.996 01:13:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:15.996 01:13:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:15.996 01:13:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:15.996 00:05:15.996 real 0m0.025s 00:05:15.996 user 0m0.013s 00:05:15.996 sys 0m0.011s 00:05:15.996 01:13:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.996 01:13:07 -- common/autotest_common.sh@10 -- # set +x 00:05:15.996 ************************************ 00:05:15.996 END TEST accel_negative_buffers 00:05:15.996 ************************************ 00:05:15.996 Error: writing output failed: Broken pipe 00:05:15.996 01:13:07 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:15.996 01:13:07 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:15.996 01:13:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.996 01:13:07 -- common/autotest_common.sh@10 -- # set +x 00:05:15.996 ************************************ 00:05:15.996 START TEST accel_crc32c 00:05:15.996 ************************************ 00:05:15.996 01:13:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:15.996 01:13:07 -- accel/accel.sh@16 -- # local accel_opc 00:05:15.996 01:13:07 -- accel/accel.sh@17 -- # local accel_module 00:05:15.996 01:13:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:15.996 01:13:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:15.996 01:13:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.996 01:13:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.996 01:13:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.996 01:13:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.996 01:13:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.996 01:13:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.996 01:13:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.996 01:13:07 -- accel/accel.sh@42 -- # jq -r . 00:05:15.996 [2024-07-27 01:13:07.571613] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:15.996 [2024-07-27 01:13:07.571678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521612 ] 00:05:15.996 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.996 [2024-07-27 01:13:07.634359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.996 [2024-07-27 01:13:07.751240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.369 01:13:09 -- accel/accel.sh@18 -- # out=' 00:05:17.369 SPDK Configuration: 00:05:17.369 Core mask: 0x1 00:05:17.369 00:05:17.369 Accel Perf Configuration: 00:05:17.369 Workload Type: crc32c 00:05:17.369 CRC-32C seed: 32 00:05:17.369 Transfer size: 4096 bytes 00:05:17.369 Vector count 1 00:05:17.369 Module: software 00:05:17.369 Queue depth: 32 00:05:17.369 Allocate depth: 32 00:05:17.369 # threads/core: 1 00:05:17.369 Run time: 1 seconds 00:05:17.369 Verify: Yes 00:05:17.369 00:05:17.369 Running for 1 seconds... 
00:05:17.369 00:05:17.369 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:17.369 ------------------------------------------------------------------------------------ 00:05:17.369 0,0 404416/s 1579 MiB/s 0 0 00:05:17.369 ==================================================================================== 00:05:17.369 Total 404416/s 1579 MiB/s 0 0' 00:05:17.369 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.370 01:13:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:17.370 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.370 01:13:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:17.370 01:13:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:17.370 01:13:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:17.370 01:13:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.370 01:13:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.370 01:13:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:17.370 01:13:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:17.370 01:13:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:17.370 01:13:09 -- accel/accel.sh@42 -- # jq -r . 00:05:17.370 [2024-07-27 01:13:09.036670] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:17.370 [2024-07-27 01:13:09.036759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521766 ] 00:05:17.370 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.370 [2024-07-27 01:13:09.099202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.628 [2024-07-27 01:13:09.214901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val= 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val= 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val=0x1 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val= 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val= 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val=crc32c 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val=32 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 
01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val= 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val=software 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@23 -- # accel_module=software 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val=32 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val=32 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val=1 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val=Yes 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val= 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:17.628 01:13:09 -- accel/accel.sh@21 -- # val= 00:05:17.628 01:13:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # IFS=: 00:05:17.628 01:13:09 -- accel/accel.sh@20 -- # read -r var val 00:05:19.001 01:13:10 -- accel/accel.sh@21 -- # val= 00:05:19.001 01:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.001 01:13:10 -- accel/accel.sh@20 -- # IFS=: 00:05:19.001 01:13:10 -- accel/accel.sh@20 -- # read -r var val 00:05:19.001 01:13:10 -- accel/accel.sh@21 -- # val= 00:05:19.001 01:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.001 01:13:10 -- accel/accel.sh@20 -- # IFS=: 00:05:19.001 01:13:10 -- accel/accel.sh@20 -- # read -r var val 00:05:19.001 01:13:10 -- accel/accel.sh@21 -- # val= 00:05:19.001 01:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.001 01:13:10 -- accel/accel.sh@20 -- # IFS=: 00:05:19.001 01:13:10 -- accel/accel.sh@20 -- # read -r var val 00:05:19.001 01:13:10 -- accel/accel.sh@21 -- # val= 00:05:19.001 01:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.001 01:13:10 -- accel/accel.sh@20 -- # IFS=: 00:05:19.001 01:13:10 -- accel/accel.sh@20 -- # read -r var val 00:05:19.001 01:13:10 -- accel/accel.sh@21 -- # val= 00:05:19.001 01:13:10 -- accel/accel.sh@22 -- # case "$var" in 
00:05:19.001 01:13:10 -- accel/accel.sh@20 -- # IFS=: 00:05:19.001 01:13:10 -- accel/accel.sh@20 -- # read -r var val 00:05:19.001 01:13:10 -- accel/accel.sh@21 -- # val= 00:05:19.001 01:13:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.001 01:13:10 -- accel/accel.sh@20 -- # IFS=: 00:05:19.001 01:13:10 -- accel/accel.sh@20 -- # read -r var val 00:05:19.001 01:13:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:19.001 01:13:10 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:19.001 01:13:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.001 00:05:19.001 real 0m2.940s 00:05:19.001 user 0m2.654s 00:05:19.001 sys 0m0.278s 00:05:19.001 01:13:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.001 01:13:10 -- common/autotest_common.sh@10 -- # set +x 00:05:19.001 ************************************ 00:05:19.001 END TEST accel_crc32c 00:05:19.001 ************************************ 00:05:19.001 01:13:10 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:19.001 01:13:10 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:19.001 01:13:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.001 01:13:10 -- common/autotest_common.sh@10 -- # set +x 00:05:19.001 ************************************ 00:05:19.001 START TEST accel_crc32c_C2 00:05:19.001 ************************************ 00:05:19.001 01:13:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:19.001 01:13:10 -- accel/accel.sh@16 -- # local accel_opc 00:05:19.001 01:13:10 -- accel/accel.sh@17 -- # local accel_module 00:05:19.001 01:13:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:19.001 01:13:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:19.001 01:13:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.001 01:13:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:19.001 01:13:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.001 01:13:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.001 01:13:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:19.001 01:13:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:19.001 01:13:10 -- accel/accel.sh@41 -- # local IFS=, 00:05:19.001 01:13:10 -- accel/accel.sh@42 -- # jq -r . 00:05:19.001 [2024-07-27 01:13:10.542833] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
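Note on the long run of val= lines in the crc32c test that just finished (and repeated in every accel test below): accel.sh re-runs accel_perf, captures its "SPDK Configuration" dump into $out, and walks it with IFS=: read -r var val to pick out fields such as the workload (accel_opc=crc32c) and the module in use (accel_module=software). A rough sketch of that parse, with the whitespace trimming being an assumption rather than the script's actual code:

    while IFS=: read -r var val; do
      val=${val# }                               # drop the leading space after the colon
      case "$var" in
        *'Workload Type'*) accel_opc=$val ;;     # e.g. crc32c
        *Module*)          accel_module=$val ;;  # software here, since no HW accel module is configured
      esac
    done <<< "$out"

The [[ -n software ]] / [[ -n crc32c ]] checks at the end of each test then confirm both fields were found and that the software module handled the opcode.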
00:05:19.001 [2024-07-27 01:13:10.542916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521921 ] 00:05:19.001 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.001 [2024-07-27 01:13:10.606971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.001 [2024-07-27 01:13:10.721563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.374 01:13:11 -- accel/accel.sh@18 -- # out=' 00:05:20.374 SPDK Configuration: 00:05:20.374 Core mask: 0x1 00:05:20.374 00:05:20.374 Accel Perf Configuration: 00:05:20.374 Workload Type: crc32c 00:05:20.374 CRC-32C seed: 0 00:05:20.375 Transfer size: 4096 bytes 00:05:20.375 Vector count 2 00:05:20.375 Module: software 00:05:20.375 Queue depth: 32 00:05:20.375 Allocate depth: 32 00:05:20.375 # threads/core: 1 00:05:20.375 Run time: 1 seconds 00:05:20.375 Verify: Yes 00:05:20.375 00:05:20.375 Running for 1 seconds... 00:05:20.375 00:05:20.375 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:20.375 ------------------------------------------------------------------------------------ 00:05:20.375 0,0 314144/s 2454 MiB/s 0 0 00:05:20.375 ==================================================================================== 00:05:20.375 Total 314144/s 1227 MiB/s 0 0' 00:05:20.375 01:13:11 -- accel/accel.sh@20 -- # IFS=: 00:05:20.375 01:13:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:20.375 01:13:11 -- accel/accel.sh@20 -- # read -r var val 00:05:20.375 01:13:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:20.375 01:13:11 -- accel/accel.sh@12 -- # build_accel_config 00:05:20.375 01:13:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:20.375 01:13:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.375 01:13:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.375 01:13:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:20.375 01:13:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:20.375 01:13:11 -- accel/accel.sh@41 -- # local IFS=, 00:05:20.375 01:13:11 -- accel/accel.sh@42 -- # jq -r . 00:05:20.375 [2024-07-27 01:13:12.016018] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
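Note: accel_crc32c_C2 runs the same CRC-32C workload with a two-buffer io vector (-C 2, "Vector count 2" in the configuration above). The per-core row and the Total row of its table differ by exactly that factor: 314144 transfers/s counting both 4096-byte vectors is about 2454 MiB/s, while counting a single vector gives about 1227 MiB/s, which is what the Total row prints. A one-line check of both figures (not tool output):

    awk 'BEGIN { t = 314144; sz = 4096; printf "%d MiB/s vs %d MiB/s\n", t*sz*2/2^20, t*sz/2^20 }'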
00:05:20.375 [2024-07-27 01:13:12.016132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522178 ] 00:05:20.375 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.375 [2024-07-27 01:13:12.076843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.633 [2024-07-27 01:13:12.192199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val= 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val= 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val=0x1 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val= 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val= 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val=crc32c 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val=0 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val= 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val=software 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@23 -- # accel_module=software 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val=32 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val=32 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- 
accel/accel.sh@21 -- # val=1 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val=Yes 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val= 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:20.633 01:13:12 -- accel/accel.sh@21 -- # val= 00:05:20.633 01:13:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # IFS=: 00:05:20.633 01:13:12 -- accel/accel.sh@20 -- # read -r var val 00:05:22.005 01:13:13 -- accel/accel.sh@21 -- # val= 00:05:22.005 01:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.005 01:13:13 -- accel/accel.sh@20 -- # IFS=: 00:05:22.005 01:13:13 -- accel/accel.sh@20 -- # read -r var val 00:05:22.005 01:13:13 -- accel/accel.sh@21 -- # val= 00:05:22.005 01:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.005 01:13:13 -- accel/accel.sh@20 -- # IFS=: 00:05:22.005 01:13:13 -- accel/accel.sh@20 -- # read -r var val 00:05:22.005 01:13:13 -- accel/accel.sh@21 -- # val= 00:05:22.005 01:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.005 01:13:13 -- accel/accel.sh@20 -- # IFS=: 00:05:22.005 01:13:13 -- accel/accel.sh@20 -- # read -r var val 00:05:22.005 01:13:13 -- accel/accel.sh@21 -- # val= 00:05:22.005 01:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.005 01:13:13 -- accel/accel.sh@20 -- # IFS=: 00:05:22.005 01:13:13 -- accel/accel.sh@20 -- # read -r var val 00:05:22.005 01:13:13 -- accel/accel.sh@21 -- # val= 00:05:22.005 01:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.005 01:13:13 -- accel/accel.sh@20 -- # IFS=: 00:05:22.005 01:13:13 -- accel/accel.sh@20 -- # read -r var val 00:05:22.005 01:13:13 -- accel/accel.sh@21 -- # val= 00:05:22.005 01:13:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.005 01:13:13 -- accel/accel.sh@20 -- # IFS=: 00:05:22.005 01:13:13 -- accel/accel.sh@20 -- # read -r var val 00:05:22.005 01:13:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:22.005 01:13:13 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:22.005 01:13:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.005 00:05:22.005 real 0m2.928s 00:05:22.005 user 0m2.632s 00:05:22.005 sys 0m0.286s 00:05:22.005 01:13:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.005 01:13:13 -- common/autotest_common.sh@10 -- # set +x 00:05:22.005 ************************************ 00:05:22.005 END TEST accel_crc32c_C2 00:05:22.005 ************************************ 00:05:22.005 01:13:13 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:22.005 01:13:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:22.005 01:13:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.005 01:13:13 -- common/autotest_common.sh@10 -- # set +x 00:05:22.005 ************************************ 00:05:22.005 START TEST accel_copy 
00:05:22.005 ************************************ 00:05:22.005 01:13:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:05:22.005 01:13:13 -- accel/accel.sh@16 -- # local accel_opc 00:05:22.005 01:13:13 -- accel/accel.sh@17 -- # local accel_module 00:05:22.005 01:13:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:22.005 01:13:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:22.005 01:13:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.005 01:13:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:22.005 01:13:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.005 01:13:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.005 01:13:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:22.005 01:13:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:22.005 01:13:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:22.005 01:13:13 -- accel/accel.sh@42 -- # jq -r . 00:05:22.005 [2024-07-27 01:13:13.491017] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:22.005 [2024-07-27 01:13:13.491124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522344 ] 00:05:22.005 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.005 [2024-07-27 01:13:13.556045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.005 [2024-07-27 01:13:13.671901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.376 01:13:14 -- accel/accel.sh@18 -- # out=' 00:05:23.376 SPDK Configuration: 00:05:23.376 Core mask: 0x1 00:05:23.376 00:05:23.376 Accel Perf Configuration: 00:05:23.376 Workload Type: copy 00:05:23.376 Transfer size: 4096 bytes 00:05:23.376 Vector count 1 00:05:23.376 Module: software 00:05:23.376 Queue depth: 32 00:05:23.376 Allocate depth: 32 00:05:23.376 # threads/core: 1 00:05:23.376 Run time: 1 seconds 00:05:23.376 Verify: Yes 00:05:23.376 00:05:23.376 Running for 1 seconds... 00:05:23.376 00:05:23.376 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:23.376 ------------------------------------------------------------------------------------ 00:05:23.376 0,0 279008/s 1089 MiB/s 0 0 00:05:23.376 ==================================================================================== 00:05:23.376 Total 279008/s 1089 MiB/s 0 0' 00:05:23.376 01:13:14 -- accel/accel.sh@20 -- # IFS=: 00:05:23.376 01:13:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:23.376 01:13:14 -- accel/accel.sh@20 -- # read -r var val 00:05:23.376 01:13:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:23.376 01:13:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.376 01:13:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:23.376 01:13:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.376 01:13:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.376 01:13:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:23.376 01:13:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:23.376 01:13:14 -- accel/accel.sh@41 -- # local IFS=, 00:05:23.376 01:13:14 -- accel/accel.sh@42 -- # jq -r . 00:05:23.376 [2024-07-27 01:13:14.968681] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
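Note: the MiB/s columns throughout these tables are just transfers/s multiplied by the 4096-byte transfer size; for the copy result above, for example (a check, not tool output):

    awk 'BEGIN { printf "%d MiB/s\n", 279008 * 4096 / 2^20 }'   # -> 1089, matching the copy table

The crc32c row earlier (404416/s -> 1579 MiB/s) and the fill rows below follow the same formula.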
00:05:23.376 [2024-07-27 01:13:14.968769] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522487 ] 00:05:23.376 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.376 [2024-07-27 01:13:15.028923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.634 [2024-07-27 01:13:15.144121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val= 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val= 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val=0x1 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val= 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val= 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val=copy 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val= 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val=software 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@23 -- # accel_module=software 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val=32 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val=32 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val=1 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val=Yes 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val= 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:23.634 01:13:15 -- accel/accel.sh@21 -- # val= 00:05:23.634 01:13:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # IFS=: 00:05:23.634 01:13:15 -- accel/accel.sh@20 -- # read -r var val 00:05:25.005 01:13:16 -- accel/accel.sh@21 -- # val= 00:05:25.005 01:13:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.005 01:13:16 -- accel/accel.sh@20 -- # IFS=: 00:05:25.005 01:13:16 -- accel/accel.sh@20 -- # read -r var val 00:05:25.005 01:13:16 -- accel/accel.sh@21 -- # val= 00:05:25.005 01:13:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.005 01:13:16 -- accel/accel.sh@20 -- # IFS=: 00:05:25.005 01:13:16 -- accel/accel.sh@20 -- # read -r var val 00:05:25.005 01:13:16 -- accel/accel.sh@21 -- # val= 00:05:25.005 01:13:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.005 01:13:16 -- accel/accel.sh@20 -- # IFS=: 00:05:25.005 01:13:16 -- accel/accel.sh@20 -- # read -r var val 00:05:25.005 01:13:16 -- accel/accel.sh@21 -- # val= 00:05:25.005 01:13:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.005 01:13:16 -- accel/accel.sh@20 -- # IFS=: 00:05:25.005 01:13:16 -- accel/accel.sh@20 -- # read -r var val 00:05:25.005 01:13:16 -- accel/accel.sh@21 -- # val= 00:05:25.005 01:13:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.005 01:13:16 -- accel/accel.sh@20 -- # IFS=: 00:05:25.005 01:13:16 -- accel/accel.sh@20 -- # read -r var val 00:05:25.005 01:13:16 -- accel/accel.sh@21 -- # val= 00:05:25.005 01:13:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.005 01:13:16 -- accel/accel.sh@20 -- # IFS=: 00:05:25.005 01:13:16 -- accel/accel.sh@20 -- # read -r var val 00:05:25.005 01:13:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:25.005 01:13:16 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:25.005 01:13:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.005 00:05:25.005 real 0m2.941s 00:05:25.005 user 0m2.641s 00:05:25.005 sys 0m0.290s 00:05:25.005 01:13:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.005 01:13:16 -- common/autotest_common.sh@10 -- # set +x 00:05:25.005 ************************************ 00:05:25.005 END TEST accel_copy 00:05:25.005 ************************************ 00:05:25.005 01:13:16 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:25.005 01:13:16 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:25.005 01:13:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.005 01:13:16 -- common/autotest_common.sh@10 -- # set +x 00:05:25.005 ************************************ 00:05:25.005 START TEST accel_fill 00:05:25.005 ************************************ 00:05:25.005 01:13:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:25.005 01:13:16 -- accel/accel.sh@16 -- # local accel_opc 
00:05:25.005 01:13:16 -- accel/accel.sh@17 -- # local accel_module 00:05:25.005 01:13:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:25.005 01:13:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:25.005 01:13:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.005 01:13:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.005 01:13:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.005 01:13:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.005 01:13:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.005 01:13:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.005 01:13:16 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.005 01:13:16 -- accel/accel.sh@42 -- # jq -r . 00:05:25.005 [2024-07-27 01:13:16.454324] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:25.005 [2024-07-27 01:13:16.454424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522765 ] 00:05:25.005 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.005 [2024-07-27 01:13:16.513943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.005 [2024-07-27 01:13:16.630098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.418 01:13:17 -- accel/accel.sh@18 -- # out=' 00:05:26.418 SPDK Configuration: 00:05:26.418 Core mask: 0x1 00:05:26.418 00:05:26.418 Accel Perf Configuration: 00:05:26.418 Workload Type: fill 00:05:26.418 Fill pattern: 0x80 00:05:26.418 Transfer size: 4096 bytes 00:05:26.418 Vector count 1 00:05:26.418 Module: software 00:05:26.418 Queue depth: 64 00:05:26.418 Allocate depth: 64 00:05:26.418 # threads/core: 1 00:05:26.418 Run time: 1 seconds 00:05:26.418 Verify: Yes 00:05:26.418 00:05:26.418 Running for 1 seconds... 00:05:26.418 00:05:26.418 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:26.418 ------------------------------------------------------------------------------------ 00:05:26.418 0,0 404928/s 1581 MiB/s 0 0 00:05:26.418 ==================================================================================== 00:05:26.418 Total 404928/s 1581 MiB/s 0 0' 00:05:26.418 01:13:17 -- accel/accel.sh@20 -- # IFS=: 00:05:26.418 01:13:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:26.418 01:13:17 -- accel/accel.sh@20 -- # read -r var val 00:05:26.418 01:13:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:26.418 01:13:17 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.418 01:13:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:26.418 01:13:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.418 01:13:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.419 01:13:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:26.419 01:13:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:26.419 01:13:17 -- accel/accel.sh@41 -- # local IFS=, 00:05:26.419 01:13:17 -- accel/accel.sh@42 -- # jq -r . 00:05:26.419 [2024-07-27 01:13:17.914794] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
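Note: the accel_fill invocation uses a few flags the earlier tests did not; decoding them against the accel_perf usage text printed earlier in this log (the comment block is an aside, not script output):

    # -t 1    run the workload for 1 second
    # -w fill workload type
    # -f 128  fill byte; 128 = 0x80, the "Fill pattern: 0x80" shown in the configuration dump above
    # -q 64   queue depth per core
    # -a 64   tasks to allocate per core (defaults to the -q value when omitted)
    # -y      verify the result
    accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y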
00:05:26.419 [2024-07-27 01:13:17.914882] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522912 ] 00:05:26.419 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.419 [2024-07-27 01:13:17.976961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.419 [2024-07-27 01:13:18.091750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val= 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val= 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val=0x1 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val= 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val= 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val=fill 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val=0x80 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val= 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val=software 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@23 -- # accel_module=software 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val=64 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val=64 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- 
accel/accel.sh@21 -- # val=1 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val=Yes 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val= 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:26.419 01:13:18 -- accel/accel.sh@21 -- # val= 00:05:26.419 01:13:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # IFS=: 00:05:26.419 01:13:18 -- accel/accel.sh@20 -- # read -r var val 00:05:27.790 01:13:19 -- accel/accel.sh@21 -- # val= 00:05:27.791 01:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.791 01:13:19 -- accel/accel.sh@20 -- # IFS=: 00:05:27.791 01:13:19 -- accel/accel.sh@20 -- # read -r var val 00:05:27.791 01:13:19 -- accel/accel.sh@21 -- # val= 00:05:27.791 01:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.791 01:13:19 -- accel/accel.sh@20 -- # IFS=: 00:05:27.791 01:13:19 -- accel/accel.sh@20 -- # read -r var val 00:05:27.791 01:13:19 -- accel/accel.sh@21 -- # val= 00:05:27.791 01:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.791 01:13:19 -- accel/accel.sh@20 -- # IFS=: 00:05:27.791 01:13:19 -- accel/accel.sh@20 -- # read -r var val 00:05:27.791 01:13:19 -- accel/accel.sh@21 -- # val= 00:05:27.791 01:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.791 01:13:19 -- accel/accel.sh@20 -- # IFS=: 00:05:27.791 01:13:19 -- accel/accel.sh@20 -- # read -r var val 00:05:27.791 01:13:19 -- accel/accel.sh@21 -- # val= 00:05:27.791 01:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.791 01:13:19 -- accel/accel.sh@20 -- # IFS=: 00:05:27.791 01:13:19 -- accel/accel.sh@20 -- # read -r var val 00:05:27.791 01:13:19 -- accel/accel.sh@21 -- # val= 00:05:27.791 01:13:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.791 01:13:19 -- accel/accel.sh@20 -- # IFS=: 00:05:27.791 01:13:19 -- accel/accel.sh@20 -- # read -r var val 00:05:27.791 01:13:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:27.791 01:13:19 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:27.791 01:13:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.791 00:05:27.791 real 0m2.923s 00:05:27.791 user 0m2.649s 00:05:27.791 sys 0m0.264s 00:05:27.791 01:13:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.791 01:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.791 ************************************ 00:05:27.791 END TEST accel_fill 00:05:27.791 ************************************ 00:05:27.791 01:13:19 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:27.791 01:13:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:27.791 01:13:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.791 01:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.791 ************************************ 00:05:27.791 START TEST 
accel_copy_crc32c 00:05:27.791 ************************************ 00:05:27.791 01:13:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:05:27.791 01:13:19 -- accel/accel.sh@16 -- # local accel_opc 00:05:27.791 01:13:19 -- accel/accel.sh@17 -- # local accel_module 00:05:27.791 01:13:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:27.791 01:13:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:27.791 01:13:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.791 01:13:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:27.791 01:13:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.791 01:13:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.791 01:13:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:27.791 01:13:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:27.791 01:13:19 -- accel/accel.sh@41 -- # local IFS=, 00:05:27.791 01:13:19 -- accel/accel.sh@42 -- # jq -r . 00:05:27.791 [2024-07-27 01:13:19.401914] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:27.791 [2024-07-27 01:13:19.402002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523076 ] 00:05:27.791 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.791 [2024-07-27 01:13:19.463097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.049 [2024-07-27 01:13:19.579724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.424 01:13:20 -- accel/accel.sh@18 -- # out=' 00:05:29.424 SPDK Configuration: 00:05:29.424 Core mask: 0x1 00:05:29.424 00:05:29.424 Accel Perf Configuration: 00:05:29.424 Workload Type: copy_crc32c 00:05:29.424 CRC-32C seed: 0 00:05:29.424 Vector size: 4096 bytes 00:05:29.424 Transfer size: 4096 bytes 00:05:29.424 Vector count 1 00:05:29.424 Module: software 00:05:29.424 Queue depth: 32 00:05:29.424 Allocate depth: 32 00:05:29.424 # threads/core: 1 00:05:29.424 Run time: 1 seconds 00:05:29.424 Verify: Yes 00:05:29.424 00:05:29.424 Running for 1 seconds... 00:05:29.424 00:05:29.424 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:29.424 ------------------------------------------------------------------------------------ 00:05:29.424 0,0 218688/s 854 MiB/s 0 0 00:05:29.424 ==================================================================================== 00:05:29.424 Total 218688/s 854 MiB/s 0 0' 00:05:29.424 01:13:20 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:20 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:29.424 01:13:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:29.424 01:13:20 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.424 01:13:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:29.424 01:13:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.424 01:13:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.424 01:13:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:29.424 01:13:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:29.424 01:13:20 -- accel/accel.sh@41 -- # local IFS=, 00:05:29.424 01:13:20 -- accel/accel.sh@42 -- # jq -r . 
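Note: each of these tests reduces to one accel_perf invocation, and the command line logged above maps directly onto the 'Accel Perf Configuration' block (-t run time in seconds, -w workload, -y verify). Queue depth and allocate depth were not passed for copy_crc32c and came out as 32, so 32 appears to be the built-in default; the '-c /dev/fd/62' argument is the harness feeding in its (empty, per build_accel_config) JSON config. A rough way to repeat this run by hand, under those assumptions:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y   # software module, 4096-byte buffers, verify enabled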
00:05:29.424 [2024-07-27 01:13:20.865403] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:29.424 [2024-07-27 01:13:20.865500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523324 ] 00:05:29.424 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.424 [2024-07-27 01:13:20.928862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.424 [2024-07-27 01:13:21.044796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val= 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val= 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val=0x1 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val= 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val= 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val=0 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val= 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val=software 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@23 -- # accel_module=software 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val=32 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 
00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val=32 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val=1 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val=Yes 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val= 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:29.424 01:13:21 -- accel/accel.sh@21 -- # val= 00:05:29.424 01:13:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # IFS=: 00:05:29.424 01:13:21 -- accel/accel.sh@20 -- # read -r var val 00:05:30.796 01:13:22 -- accel/accel.sh@21 -- # val= 00:05:30.796 01:13:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.796 01:13:22 -- accel/accel.sh@20 -- # IFS=: 00:05:30.796 01:13:22 -- accel/accel.sh@20 -- # read -r var val 00:05:30.796 01:13:22 -- accel/accel.sh@21 -- # val= 00:05:30.796 01:13:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.796 01:13:22 -- accel/accel.sh@20 -- # IFS=: 00:05:30.796 01:13:22 -- accel/accel.sh@20 -- # read -r var val 00:05:30.796 01:13:22 -- accel/accel.sh@21 -- # val= 00:05:30.796 01:13:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.796 01:13:22 -- accel/accel.sh@20 -- # IFS=: 00:05:30.796 01:13:22 -- accel/accel.sh@20 -- # read -r var val 00:05:30.796 01:13:22 -- accel/accel.sh@21 -- # val= 00:05:30.796 01:13:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.796 01:13:22 -- accel/accel.sh@20 -- # IFS=: 00:05:30.796 01:13:22 -- accel/accel.sh@20 -- # read -r var val 00:05:30.797 01:13:22 -- accel/accel.sh@21 -- # val= 00:05:30.797 01:13:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.797 01:13:22 -- accel/accel.sh@20 -- # IFS=: 00:05:30.797 01:13:22 -- accel/accel.sh@20 -- # read -r var val 00:05:30.797 01:13:22 -- accel/accel.sh@21 -- # val= 00:05:30.797 01:13:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.797 01:13:22 -- accel/accel.sh@20 -- # IFS=: 00:05:30.797 01:13:22 -- accel/accel.sh@20 -- # read -r var val 00:05:30.797 01:13:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:30.797 01:13:22 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:30.797 01:13:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.797 00:05:30.797 real 0m2.940s 00:05:30.797 user 0m2.648s 00:05:30.797 sys 0m0.283s 00:05:30.797 01:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.797 01:13:22 -- common/autotest_common.sh@10 -- # set +x 00:05:30.797 ************************************ 00:05:30.797 END TEST accel_copy_crc32c 00:05:30.797 ************************************ 00:05:30.797 
01:13:22 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:30.797 01:13:22 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:30.797 01:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.797 01:13:22 -- common/autotest_common.sh@10 -- # set +x 00:05:30.797 ************************************ 00:05:30.797 START TEST accel_copy_crc32c_C2 00:05:30.797 ************************************ 00:05:30.797 01:13:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:30.797 01:13:22 -- accel/accel.sh@16 -- # local accel_opc 00:05:30.797 01:13:22 -- accel/accel.sh@17 -- # local accel_module 00:05:30.797 01:13:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:30.797 01:13:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:30.797 01:13:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:30.797 01:13:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:30.797 01:13:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.797 01:13:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.797 01:13:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:30.797 01:13:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:30.797 01:13:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:30.797 01:13:22 -- accel/accel.sh@42 -- # jq -r . 00:05:30.797 [2024-07-27 01:13:22.364490] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:30.797 [2024-07-27 01:13:22.364571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523494 ] 00:05:30.797 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.797 [2024-07-27 01:13:22.425030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.797 [2024-07-27 01:13:22.538931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.166 01:13:23 -- accel/accel.sh@18 -- # out=' 00:05:32.166 SPDK Configuration: 00:05:32.166 Core mask: 0x1 00:05:32.166 00:05:32.166 Accel Perf Configuration: 00:05:32.166 Workload Type: copy_crc32c 00:05:32.166 CRC-32C seed: 0 00:05:32.166 Vector size: 4096 bytes 00:05:32.166 Transfer size: 8192 bytes 00:05:32.166 Vector count 2 00:05:32.166 Module: software 00:05:32.166 Queue depth: 32 00:05:32.166 Allocate depth: 32 00:05:32.166 # threads/core: 1 00:05:32.166 Run time: 1 seconds 00:05:32.166 Verify: Yes 00:05:32.166 00:05:32.166 Running for 1 seconds... 
00:05:32.166 00:05:32.166 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:32.166 ------------------------------------------------------------------------------------ 00:05:32.166 0,0 154752/s 1209 MiB/s 0 0 00:05:32.166 ==================================================================================== 00:05:32.166 Total 154752/s 604 MiB/s 0 0' 00:05:32.166 01:13:23 -- accel/accel.sh@20 -- # IFS=: 00:05:32.166 01:13:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:32.166 01:13:23 -- accel/accel.sh@20 -- # read -r var val 00:05:32.166 01:13:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:32.166 01:13:23 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.166 01:13:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:32.166 01:13:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.166 01:13:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.166 01:13:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:32.166 01:13:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:32.166 01:13:23 -- accel/accel.sh@41 -- # local IFS=, 00:05:32.166 01:13:23 -- accel/accel.sh@42 -- # jq -r . 00:05:32.166 [2024-07-27 01:13:23.831721] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:32.166 [2024-07-27 01:13:23.831805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523643 ] 00:05:32.166 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.166 [2024-07-27 01:13:23.893408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.424 [2024-07-27 01:13:24.009442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val= 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val= 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val=0x1 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val= 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val= 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val=0 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 
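Note: in the copy_crc32c -C 2 table above, the per-core row and the Total row disagree (1209 vs 604 MiB/s) even though both report the same 154752 transfers/s. That rate at the 8192-byte transfer size is 1209 MiB/s, while the same rate at the 4096-byte vector size is about 604 MiB/s, so the two rows appear to use different sizes when converting to MiB/s; the transfer count itself is consistent. Recomputing both:

  awk 'BEGIN { r = 154752; printf "%.1f vs %.1f MiB/s\n", r * 8192 / 2^20, r * 4096 / 2^20 }'   # 1209.0 vs 604.5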
00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val= 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val=software 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@23 -- # accel_module=software 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val=32 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val=32 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val=1 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val=Yes 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val= 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:32.424 01:13:24 -- accel/accel.sh@21 -- # val= 00:05:32.424 01:13:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.424 01:13:24 -- accel/accel.sh@20 -- # IFS=: 00:05:32.425 01:13:24 -- accel/accel.sh@20 -- # read -r var val 00:05:33.796 01:13:25 -- accel/accel.sh@21 -- # val= 00:05:33.796 01:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.796 01:13:25 -- accel/accel.sh@20 -- # IFS=: 00:05:33.796 01:13:25 -- accel/accel.sh@20 -- # read -r var val 00:05:33.796 01:13:25 -- accel/accel.sh@21 -- # val= 00:05:33.796 01:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.796 01:13:25 -- accel/accel.sh@20 -- # IFS=: 00:05:33.796 01:13:25 -- accel/accel.sh@20 -- # read -r var val 00:05:33.796 01:13:25 -- accel/accel.sh@21 -- # val= 00:05:33.796 01:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.796 01:13:25 -- accel/accel.sh@20 -- # IFS=: 00:05:33.796 01:13:25 -- accel/accel.sh@20 -- # read -r var val 00:05:33.796 01:13:25 -- accel/accel.sh@21 -- # val= 00:05:33.796 01:13:25 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:33.796 01:13:25 -- accel/accel.sh@20 -- # IFS=: 00:05:33.796 01:13:25 -- accel/accel.sh@20 -- # read -r var val 00:05:33.796 01:13:25 -- accel/accel.sh@21 -- # val= 00:05:33.796 01:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.796 01:13:25 -- accel/accel.sh@20 -- # IFS=: 00:05:33.796 01:13:25 -- accel/accel.sh@20 -- # read -r var val 00:05:33.796 01:13:25 -- accel/accel.sh@21 -- # val= 00:05:33.796 01:13:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.796 01:13:25 -- accel/accel.sh@20 -- # IFS=: 00:05:33.796 01:13:25 -- accel/accel.sh@20 -- # read -r var val 00:05:33.796 01:13:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:33.796 01:13:25 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:33.796 01:13:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.796 00:05:33.796 real 0m2.947s 00:05:33.796 user 0m2.646s 00:05:33.796 sys 0m0.291s 00:05:33.796 01:13:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.796 01:13:25 -- common/autotest_common.sh@10 -- # set +x 00:05:33.796 ************************************ 00:05:33.796 END TEST accel_copy_crc32c_C2 00:05:33.796 ************************************ 00:05:33.796 01:13:25 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:33.796 01:13:25 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:33.796 01:13:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.796 01:13:25 -- common/autotest_common.sh@10 -- # set +x 00:05:33.796 ************************************ 00:05:33.796 START TEST accel_dualcast 00:05:33.796 ************************************ 00:05:33.796 01:13:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:05:33.796 01:13:25 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.796 01:13:25 -- accel/accel.sh@17 -- # local accel_module 00:05:33.796 01:13:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:33.796 01:13:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:33.796 01:13:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.796 01:13:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.796 01:13:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.796 01:13:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.796 01:13:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.796 01:13:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.796 01:13:25 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.796 01:13:25 -- accel/accel.sh@42 -- # jq -r . 00:05:33.796 [2024-07-27 01:13:25.334580] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:33.796 [2024-07-27 01:13:25.334665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523851 ] 00:05:33.796 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.796 [2024-07-27 01:13:25.396913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.796 [2024-07-27 01:13:25.515929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.186 01:13:26 -- accel/accel.sh@18 -- # out=' 00:05:35.186 SPDK Configuration: 00:05:35.186 Core mask: 0x1 00:05:35.186 00:05:35.186 Accel Perf Configuration: 00:05:35.186 Workload Type: dualcast 00:05:35.186 Transfer size: 4096 bytes 00:05:35.186 Vector count 1 00:05:35.186 Module: software 00:05:35.186 Queue depth: 32 00:05:35.186 Allocate depth: 32 00:05:35.186 # threads/core: 1 00:05:35.186 Run time: 1 seconds 00:05:35.186 Verify: Yes 00:05:35.186 00:05:35.186 Running for 1 seconds... 00:05:35.186 00:05:35.186 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:35.186 ------------------------------------------------------------------------------------ 00:05:35.186 0,0 298752/s 1167 MiB/s 0 0 00:05:35.186 ==================================================================================== 00:05:35.186 Total 298752/s 1167 MiB/s 0 0' 00:05:35.186 01:13:26 -- accel/accel.sh@20 -- # IFS=: 00:05:35.186 01:13:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:35.186 01:13:26 -- accel/accel.sh@20 -- # read -r var val 00:05:35.186 01:13:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:35.186 01:13:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.186 01:13:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:35.186 01:13:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.186 01:13:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.186 01:13:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:35.186 01:13:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:35.186 01:13:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:35.186 01:13:26 -- accel/accel.sh@42 -- # jq -r . 00:05:35.186 [2024-07-27 01:13:26.817022] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
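Note: dualcast, as used here, copies one 4096-byte source into two destination buffers per operation (that is the usual SPDK accel meaning of the opcode; taken as an assumption, it is not spelled out in this log). The reported rate counts the transfer size once rather than once per destination, since 298752 transfers/s at 4096 bytes lands exactly on the 1167 MiB/s shown above:

  echo $(( 298752 * 4096 / 1024 / 1024 ))   # 1167, matches the dualcast Total row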
00:05:35.186 [2024-07-27 01:13:26.817120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524062 ] 00:05:35.186 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.186 [2024-07-27 01:13:26.882161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.444 [2024-07-27 01:13:27.001990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.444 01:13:27 -- accel/accel.sh@21 -- # val= 00:05:35.444 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.444 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.444 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.444 01:13:27 -- accel/accel.sh@21 -- # val= 00:05:35.444 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.444 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.444 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.444 01:13:27 -- accel/accel.sh@21 -- # val=0x1 00:05:35.444 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.444 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.444 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.444 01:13:27 -- accel/accel.sh@21 -- # val= 00:05:35.444 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.444 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.444 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.444 01:13:27 -- accel/accel.sh@21 -- # val= 00:05:35.444 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.445 01:13:27 -- accel/accel.sh@21 -- # val=dualcast 00:05:35.445 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.445 01:13:27 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.445 01:13:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:35.445 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.445 01:13:27 -- accel/accel.sh@21 -- # val= 00:05:35.445 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.445 01:13:27 -- accel/accel.sh@21 -- # val=software 00:05:35.445 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.445 01:13:27 -- accel/accel.sh@23 -- # accel_module=software 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.445 01:13:27 -- accel/accel.sh@21 -- # val=32 00:05:35.445 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.445 01:13:27 -- accel/accel.sh@21 -- # val=32 00:05:35.445 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.445 01:13:27 -- accel/accel.sh@21 -- # val=1 00:05:35.445 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.445 01:13:27 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:35.445 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.445 01:13:27 -- accel/accel.sh@21 -- # val=Yes 00:05:35.445 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.445 01:13:27 -- accel/accel.sh@21 -- # val= 00:05:35.445 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:35.445 01:13:27 -- accel/accel.sh@21 -- # val= 00:05:35.445 01:13:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # IFS=: 00:05:35.445 01:13:27 -- accel/accel.sh@20 -- # read -r var val 00:05:36.819 01:13:28 -- accel/accel.sh@21 -- # val= 00:05:36.819 01:13:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.819 01:13:28 -- accel/accel.sh@20 -- # IFS=: 00:05:36.819 01:13:28 -- accel/accel.sh@20 -- # read -r var val 00:05:36.819 01:13:28 -- accel/accel.sh@21 -- # val= 00:05:36.819 01:13:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.819 01:13:28 -- accel/accel.sh@20 -- # IFS=: 00:05:36.819 01:13:28 -- accel/accel.sh@20 -- # read -r var val 00:05:36.819 01:13:28 -- accel/accel.sh@21 -- # val= 00:05:36.819 01:13:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.819 01:13:28 -- accel/accel.sh@20 -- # IFS=: 00:05:36.819 01:13:28 -- accel/accel.sh@20 -- # read -r var val 00:05:36.819 01:13:28 -- accel/accel.sh@21 -- # val= 00:05:36.819 01:13:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.819 01:13:28 -- accel/accel.sh@20 -- # IFS=: 00:05:36.819 01:13:28 -- accel/accel.sh@20 -- # read -r var val 00:05:36.819 01:13:28 -- accel/accel.sh@21 -- # val= 00:05:36.819 01:13:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.819 01:13:28 -- accel/accel.sh@20 -- # IFS=: 00:05:36.819 01:13:28 -- accel/accel.sh@20 -- # read -r var val 00:05:36.819 01:13:28 -- accel/accel.sh@21 -- # val= 00:05:36.819 01:13:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.819 01:13:28 -- accel/accel.sh@20 -- # IFS=: 00:05:36.819 01:13:28 -- accel/accel.sh@20 -- # read -r var val 00:05:36.819 01:13:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:36.819 01:13:28 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:36.819 01:13:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.819 00:05:36.819 real 0m2.955s 00:05:36.819 user 0m2.664s 00:05:36.819 sys 0m0.283s 00:05:36.819 01:13:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.819 01:13:28 -- common/autotest_common.sh@10 -- # set +x 00:05:36.819 ************************************ 00:05:36.819 END TEST accel_dualcast 00:05:36.819 ************************************ 00:05:36.819 01:13:28 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:36.819 01:13:28 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:36.819 01:13:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.819 01:13:28 -- common/autotest_common.sh@10 -- # set +x 00:05:36.819 ************************************ 00:05:36.819 START TEST accel_compare 00:05:36.819 ************************************ 00:05:36.819 01:13:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:05:36.819 01:13:28 -- accel/accel.sh@16 -- # local accel_opc 00:05:36.819 01:13:28 -- 
accel/accel.sh@17 -- # local accel_module 00:05:36.819 01:13:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:36.819 01:13:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:36.819 01:13:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.819 01:13:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:36.819 01:13:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.819 01:13:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.819 01:13:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:36.819 01:13:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:36.819 01:13:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:36.819 01:13:28 -- accel/accel.sh@42 -- # jq -r . 00:05:36.819 [2024-07-27 01:13:28.314669] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:36.819 [2024-07-27 01:13:28.314757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524226 ] 00:05:36.819 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.819 [2024-07-27 01:13:28.376056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.819 [2024-07-27 01:13:28.493513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.192 01:13:29 -- accel/accel.sh@18 -- # out=' 00:05:38.192 SPDK Configuration: 00:05:38.192 Core mask: 0x1 00:05:38.192 00:05:38.192 Accel Perf Configuration: 00:05:38.192 Workload Type: compare 00:05:38.192 Transfer size: 4096 bytes 00:05:38.192 Vector count 1 00:05:38.192 Module: software 00:05:38.192 Queue depth: 32 00:05:38.192 Allocate depth: 32 00:05:38.192 # threads/core: 1 00:05:38.192 Run time: 1 seconds 00:05:38.192 Verify: Yes 00:05:38.192 00:05:38.192 Running for 1 seconds... 00:05:38.192 00:05:38.192 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:38.192 ------------------------------------------------------------------------------------ 00:05:38.192 0,0 399072/s 1558 MiB/s 0 0 00:05:38.193 ==================================================================================== 00:05:38.193 Total 399072/s 1558 MiB/s 0 0' 00:05:38.193 01:13:29 -- accel/accel.sh@20 -- # IFS=: 00:05:38.193 01:13:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:38.193 01:13:29 -- accel/accel.sh@20 -- # read -r var val 00:05:38.193 01:13:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:38.193 01:13:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.193 01:13:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:38.193 01:13:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.193 01:13:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.193 01:13:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:38.193 01:13:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:38.193 01:13:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:38.193 01:13:29 -- accel/accel.sh@42 -- # jq -r . 00:05:38.193 [2024-07-27 01:13:29.789283] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
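Note: with '-y' every workload runs with 'Verify: Yes', and the last two columns of each results table (Failed, Miscompares) stay at 0; a non-zero value there would indicate corrupted data and, presumably, a failed test, though that behaviour of the harness is an assumption here. A quick scan over a saved copy of this console output (console.log is an illustrative name):

  awk '/Total .*MiB\/s/ { if ($(NF-1) + $NF != 0) print "non-zero failures:", $0 }' console.log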
00:05:38.193 [2024-07-27 01:13:29.789374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524371 ] 00:05:38.193 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.193 [2024-07-27 01:13:29.851421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.451 [2024-07-27 01:13:29.969474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val= 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val= 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val=0x1 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val= 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val= 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val=compare 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val= 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val=software 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@23 -- # accel_module=software 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val=32 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val=32 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val=1 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val=Yes 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val= 00:05:38.451 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.451 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:38.451 01:13:30 -- accel/accel.sh@21 -- # val= 00:05:38.452 01:13:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.452 01:13:30 -- accel/accel.sh@20 -- # IFS=: 00:05:38.452 01:13:30 -- accel/accel.sh@20 -- # read -r var val 00:05:39.826 01:13:31 -- accel/accel.sh@21 -- # val= 00:05:39.826 01:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.826 01:13:31 -- accel/accel.sh@20 -- # IFS=: 00:05:39.826 01:13:31 -- accel/accel.sh@20 -- # read -r var val 00:05:39.826 01:13:31 -- accel/accel.sh@21 -- # val= 00:05:39.826 01:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.826 01:13:31 -- accel/accel.sh@20 -- # IFS=: 00:05:39.827 01:13:31 -- accel/accel.sh@20 -- # read -r var val 00:05:39.827 01:13:31 -- accel/accel.sh@21 -- # val= 00:05:39.827 01:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.827 01:13:31 -- accel/accel.sh@20 -- # IFS=: 00:05:39.827 01:13:31 -- accel/accel.sh@20 -- # read -r var val 00:05:39.827 01:13:31 -- accel/accel.sh@21 -- # val= 00:05:39.827 01:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.827 01:13:31 -- accel/accel.sh@20 -- # IFS=: 00:05:39.827 01:13:31 -- accel/accel.sh@20 -- # read -r var val 00:05:39.827 01:13:31 -- accel/accel.sh@21 -- # val= 00:05:39.827 01:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.827 01:13:31 -- accel/accel.sh@20 -- # IFS=: 00:05:39.827 01:13:31 -- accel/accel.sh@20 -- # read -r var val 00:05:39.827 01:13:31 -- accel/accel.sh@21 -- # val= 00:05:39.827 01:13:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.827 01:13:31 -- accel/accel.sh@20 -- # IFS=: 00:05:39.827 01:13:31 -- accel/accel.sh@20 -- # read -r var val 00:05:39.827 01:13:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:39.827 01:13:31 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:39.827 01:13:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.827 00:05:39.827 real 0m2.956s 00:05:39.827 user 0m2.652s 00:05:39.827 sys 0m0.295s 00:05:39.827 01:13:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.827 01:13:31 -- common/autotest_common.sh@10 -- # set +x 00:05:39.827 ************************************ 00:05:39.827 END TEST accel_compare 00:05:39.827 ************************************ 00:05:39.827 01:13:31 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:39.827 01:13:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:39.827 01:13:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.827 01:13:31 -- common/autotest_common.sh@10 -- # set +x 00:05:39.827 ************************************ 00:05:39.827 START TEST accel_xor 00:05:39.827 ************************************ 00:05:39.827 01:13:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:05:39.827 01:13:31 -- accel/accel.sh@16 -- # local accel_opc 00:05:39.827 01:13:31 -- accel/accel.sh@17 
-- # local accel_module 00:05:39.827 01:13:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:39.827 01:13:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:39.827 01:13:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.827 01:13:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.827 01:13:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.827 01:13:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.827 01:13:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.827 01:13:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.827 01:13:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.827 01:13:31 -- accel/accel.sh@42 -- # jq -r . 00:05:39.827 [2024-07-27 01:13:31.297188] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:39.827 [2024-07-27 01:13:31.297272] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524649 ] 00:05:39.827 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.827 [2024-07-27 01:13:31.360243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.827 [2024-07-27 01:13:31.478065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.201 01:13:32 -- accel/accel.sh@18 -- # out=' 00:05:41.201 SPDK Configuration: 00:05:41.201 Core mask: 0x1 00:05:41.201 00:05:41.201 Accel Perf Configuration: 00:05:41.201 Workload Type: xor 00:05:41.201 Source buffers: 2 00:05:41.201 Transfer size: 4096 bytes 00:05:41.201 Vector count 1 00:05:41.201 Module: software 00:05:41.201 Queue depth: 32 00:05:41.201 Allocate depth: 32 00:05:41.201 # threads/core: 1 00:05:41.201 Run time: 1 seconds 00:05:41.201 Verify: Yes 00:05:41.201 00:05:41.201 Running for 1 seconds... 00:05:41.201 00:05:41.201 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:41.201 ------------------------------------------------------------------------------------ 00:05:41.201 0,0 192576/s 752 MiB/s 0 0 00:05:41.201 ==================================================================================== 00:05:41.201 Total 192576/s 752 MiB/s 0 0' 00:05:41.201 01:13:32 -- accel/accel.sh@20 -- # IFS=: 00:05:41.201 01:13:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:41.201 01:13:32 -- accel/accel.sh@20 -- # read -r var val 00:05:41.201 01:13:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:41.201 01:13:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.201 01:13:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.201 01:13:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.201 01:13:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.201 01:13:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.201 01:13:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.201 01:13:32 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.201 01:13:32 -- accel/accel.sh@42 -- # jq -r . 00:05:41.201 [2024-07-27 01:13:32.771000] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
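Note: accel_xor is exercised twice, first with the default two source buffers (the run above, 'Source buffers: 2') and then, in the next test below, with '-x 3'. Comparing the two command lines against their configuration dumps suggests -x sets the number of source buffers XORed into the destination. The second invocation, as logged:

  ./build/examples/accel_perf -t 1 -w xor -y -x 3   # shows up as Source buffers: 3 in the config dump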
00:05:41.201 [2024-07-27 01:13:32.771098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524793 ] 00:05:41.201 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.201 [2024-07-27 01:13:32.835035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.201 [2024-07-27 01:13:32.954951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.459 01:13:33 -- accel/accel.sh@21 -- # val= 00:05:41.459 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.459 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.459 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val= 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val=0x1 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val= 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val= 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val=xor 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val=2 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val= 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val=software 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@23 -- # accel_module=software 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val=32 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val=32 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- 
accel/accel.sh@21 -- # val=1 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val=Yes 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val= 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:41.460 01:13:33 -- accel/accel.sh@21 -- # val= 00:05:41.460 01:13:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # IFS=: 00:05:41.460 01:13:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.831 01:13:34 -- accel/accel.sh@21 -- # val= 00:05:42.831 01:13:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.831 01:13:34 -- accel/accel.sh@20 -- # IFS=: 00:05:42.831 01:13:34 -- accel/accel.sh@20 -- # read -r var val 00:05:42.831 01:13:34 -- accel/accel.sh@21 -- # val= 00:05:42.831 01:13:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.831 01:13:34 -- accel/accel.sh@20 -- # IFS=: 00:05:42.831 01:13:34 -- accel/accel.sh@20 -- # read -r var val 00:05:42.831 01:13:34 -- accel/accel.sh@21 -- # val= 00:05:42.831 01:13:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.831 01:13:34 -- accel/accel.sh@20 -- # IFS=: 00:05:42.832 01:13:34 -- accel/accel.sh@20 -- # read -r var val 00:05:42.832 01:13:34 -- accel/accel.sh@21 -- # val= 00:05:42.832 01:13:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.832 01:13:34 -- accel/accel.sh@20 -- # IFS=: 00:05:42.832 01:13:34 -- accel/accel.sh@20 -- # read -r var val 00:05:42.832 01:13:34 -- accel/accel.sh@21 -- # val= 00:05:42.832 01:13:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.832 01:13:34 -- accel/accel.sh@20 -- # IFS=: 00:05:42.832 01:13:34 -- accel/accel.sh@20 -- # read -r var val 00:05:42.832 01:13:34 -- accel/accel.sh@21 -- # val= 00:05:42.832 01:13:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.832 01:13:34 -- accel/accel.sh@20 -- # IFS=: 00:05:42.832 01:13:34 -- accel/accel.sh@20 -- # read -r var val 00:05:42.832 01:13:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:42.832 01:13:34 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:42.832 01:13:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.832 00:05:42.832 real 0m2.955s 00:05:42.832 user 0m2.658s 00:05:42.832 sys 0m0.289s 00:05:42.832 01:13:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.832 01:13:34 -- common/autotest_common.sh@10 -- # set +x 00:05:42.832 ************************************ 00:05:42.832 END TEST accel_xor 00:05:42.832 ************************************ 00:05:42.832 01:13:34 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:42.832 01:13:34 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:42.832 01:13:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.832 01:13:34 -- common/autotest_common.sh@10 -- # set +x 00:05:42.832 ************************************ 00:05:42.832 START TEST accel_xor 
00:05:42.832 ************************************ 00:05:42.832 01:13:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:05:42.832 01:13:34 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.832 01:13:34 -- accel/accel.sh@17 -- # local accel_module 00:05:42.832 01:13:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:42.832 01:13:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:42.832 01:13:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.832 01:13:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.832 01:13:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.832 01:13:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.832 01:13:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.832 01:13:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.832 01:13:34 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.832 01:13:34 -- accel/accel.sh@42 -- # jq -r . 00:05:42.832 [2024-07-27 01:13:34.275347] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:42.832 [2024-07-27 01:13:34.275442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524952 ] 00:05:42.832 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.832 [2024-07-27 01:13:34.340035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.832 [2024-07-27 01:13:34.459798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.212 01:13:35 -- accel/accel.sh@18 -- # out=' 00:05:44.212 SPDK Configuration: 00:05:44.212 Core mask: 0x1 00:05:44.212 00:05:44.212 Accel Perf Configuration: 00:05:44.212 Workload Type: xor 00:05:44.212 Source buffers: 3 00:05:44.212 Transfer size: 4096 bytes 00:05:44.212 Vector count 1 00:05:44.212 Module: software 00:05:44.212 Queue depth: 32 00:05:44.212 Allocate depth: 32 00:05:44.212 # threads/core: 1 00:05:44.212 Run time: 1 seconds 00:05:44.212 Verify: Yes 00:05:44.212 00:05:44.212 Running for 1 seconds... 00:05:44.212 00:05:44.212 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:44.212 ------------------------------------------------------------------------------------ 00:05:44.212 0,0 184192/s 719 MiB/s 0 0 00:05:44.212 ==================================================================================== 00:05:44.212 Total 184192/s 719 MiB/s 0 0' 00:05:44.212 01:13:35 -- accel/accel.sh@20 -- # IFS=: 00:05:44.212 01:13:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:44.212 01:13:35 -- accel/accel.sh@20 -- # read -r var val 00:05:44.212 01:13:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:44.212 01:13:35 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.212 01:13:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.212 01:13:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.212 01:13:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.212 01:13:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.212 01:13:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.212 01:13:35 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.212 01:13:35 -- accel/accel.sh@42 -- # jq -r . 00:05:44.212 [2024-07-27 01:13:35.753903] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
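The Bandwidth column in these tables is simply the Transfers column scaled by the 4 KiB transfer size, so the per-core row and the Total row must agree whenever only core 0 is active (core mask 0x1). An illustrative check of the 3-buffer xor result above, not part of the test itself:

  awk 'BEGIN { printf "%.1f MiB/s\n", 184192 * 4096 / (1024 * 1024) }'   # prints 719.5, i.e. the 719 MiB/s shown for 184192 transfers/s

The same arithmetic ties every per-core row to its Total row in the tables that follow.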
00:05:44.212 [2024-07-27 01:13:35.753996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525217 ] 00:05:44.212 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.212 [2024-07-27 01:13:35.816885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.212 [2024-07-27 01:13:35.936190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.471 01:13:35 -- accel/accel.sh@21 -- # val= 00:05:44.471 01:13:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:35 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:35 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:35 -- accel/accel.sh@21 -- # val= 00:05:44.471 01:13:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:35 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:35 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:35 -- accel/accel.sh@21 -- # val=0x1 00:05:44.471 01:13:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:35 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:35 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:35 -- accel/accel.sh@21 -- # val= 00:05:44.471 01:13:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:35 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:35 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:35 -- accel/accel.sh@21 -- # val= 00:05:44.471 01:13:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:35 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:35 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:35 -- accel/accel.sh@21 -- # val=xor 00:05:44.471 01:13:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:35 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:44.471 01:13:35 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:35 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:36 -- accel/accel.sh@21 -- # val=3 00:05:44.471 01:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:44.471 01:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:36 -- accel/accel.sh@21 -- # val= 00:05:44.471 01:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:36 -- accel/accel.sh@21 -- # val=software 00:05:44.471 01:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:36 -- accel/accel.sh@23 -- # accel_module=software 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:36 -- accel/accel.sh@21 -- # val=32 00:05:44.471 01:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:36 -- accel/accel.sh@21 -- # val=32 00:05:44.471 01:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:36 -- 
accel/accel.sh@21 -- # val=1 00:05:44.471 01:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:44.471 01:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:36 -- accel/accel.sh@21 -- # val=Yes 00:05:44.471 01:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:36 -- accel/accel.sh@21 -- # val= 00:05:44.471 01:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # read -r var val 00:05:44.471 01:13:36 -- accel/accel.sh@21 -- # val= 00:05:44.471 01:13:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # IFS=: 00:05:44.471 01:13:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.846 01:13:37 -- accel/accel.sh@21 -- # val= 00:05:45.846 01:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.846 01:13:37 -- accel/accel.sh@20 -- # IFS=: 00:05:45.846 01:13:37 -- accel/accel.sh@20 -- # read -r var val 00:05:45.846 01:13:37 -- accel/accel.sh@21 -- # val= 00:05:45.846 01:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.846 01:13:37 -- accel/accel.sh@20 -- # IFS=: 00:05:45.846 01:13:37 -- accel/accel.sh@20 -- # read -r var val 00:05:45.846 01:13:37 -- accel/accel.sh@21 -- # val= 00:05:45.846 01:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.846 01:13:37 -- accel/accel.sh@20 -- # IFS=: 00:05:45.846 01:13:37 -- accel/accel.sh@20 -- # read -r var val 00:05:45.846 01:13:37 -- accel/accel.sh@21 -- # val= 00:05:45.846 01:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.846 01:13:37 -- accel/accel.sh@20 -- # IFS=: 00:05:45.846 01:13:37 -- accel/accel.sh@20 -- # read -r var val 00:05:45.846 01:13:37 -- accel/accel.sh@21 -- # val= 00:05:45.846 01:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.846 01:13:37 -- accel/accel.sh@20 -- # IFS=: 00:05:45.846 01:13:37 -- accel/accel.sh@20 -- # read -r var val 00:05:45.846 01:13:37 -- accel/accel.sh@21 -- # val= 00:05:45.846 01:13:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.846 01:13:37 -- accel/accel.sh@20 -- # IFS=: 00:05:45.846 01:13:37 -- accel/accel.sh@20 -- # read -r var val 00:05:45.847 01:13:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:45.847 01:13:37 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:45.847 01:13:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.847 00:05:45.847 real 0m2.962s 00:05:45.847 user 0m2.668s 00:05:45.847 sys 0m0.286s 00:05:45.847 01:13:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.847 01:13:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.847 ************************************ 00:05:45.847 END TEST accel_xor 00:05:45.847 ************************************ 00:05:45.847 01:13:37 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:45.847 01:13:37 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:45.847 01:13:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.847 01:13:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.847 ************************************ 00:05:45.847 START TEST 
accel_dif_verify 00:05:45.847 ************************************ 00:05:45.847 01:13:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:05:45.847 01:13:37 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.847 01:13:37 -- accel/accel.sh@17 -- # local accel_module 00:05:45.847 01:13:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:45.847 01:13:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:45.847 01:13:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.847 01:13:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.847 01:13:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.847 01:13:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.847 01:13:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.847 01:13:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.847 01:13:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.847 01:13:37 -- accel/accel.sh@42 -- # jq -r . 00:05:45.847 [2024-07-27 01:13:37.261450] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:45.847 [2024-07-27 01:13:37.261540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525375 ] 00:05:45.847 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.847 [2024-07-27 01:13:37.322698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.847 [2024-07-27 01:13:37.441948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.221 01:13:38 -- accel/accel.sh@18 -- # out=' 00:05:47.221 SPDK Configuration: 00:05:47.221 Core mask: 0x1 00:05:47.221 00:05:47.221 Accel Perf Configuration: 00:05:47.221 Workload Type: dif_verify 00:05:47.221 Vector size: 4096 bytes 00:05:47.221 Transfer size: 4096 bytes 00:05:47.221 Block size: 512 bytes 00:05:47.221 Metadata size: 8 bytes 00:05:47.221 Vector count 1 00:05:47.221 Module: software 00:05:47.221 Queue depth: 32 00:05:47.221 Allocate depth: 32 00:05:47.221 # threads/core: 1 00:05:47.221 Run time: 1 seconds 00:05:47.221 Verify: No 00:05:47.221 00:05:47.221 Running for 1 seconds... 00:05:47.221 00:05:47.221 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:47.221 ------------------------------------------------------------------------------------ 00:05:47.221 0,0 81568/s 318 MiB/s 0 0 00:05:47.221 ==================================================================================== 00:05:47.221 Total 81568/s 318 MiB/s 0 0' 00:05:47.221 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.221 01:13:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:47.221 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.221 01:13:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:47.221 01:13:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.221 01:13:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.221 01:13:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.221 01:13:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.221 01:13:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.221 01:13:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.221 01:13:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.221 01:13:38 -- accel/accel.sh@42 -- # jq -r .
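accel_dif_verify exercises the protection-information (DIF) verify opcode: 4 KiB buffers split into 512-byte blocks, each carrying 8 bytes of metadata, exactly as the configuration dump above reports. The traced invocation reduces to the following sketch (same assumed checkout path as the earlier xor example):

  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w dif_verify   # software module, queue depth 32, 1-second run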
00:05:47.221 [2024-07-27 01:13:38.743256] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:47.221 [2024-07-27 01:13:38.743359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525524 ] 00:05:47.221 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.221 [2024-07-27 01:13:38.804362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.221 [2024-07-27 01:13:38.923952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.478 01:13:38 -- accel/accel.sh@21 -- # val= 00:05:47.478 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.478 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.478 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.478 01:13:38 -- accel/accel.sh@21 -- # val= 00:05:47.478 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.478 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.478 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.478 01:13:38 -- accel/accel.sh@21 -- # val=0x1 00:05:47.478 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.478 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.478 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.478 01:13:38 -- accel/accel.sh@21 -- # val= 00:05:47.478 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.478 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.478 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.478 01:13:38 -- accel/accel.sh@21 -- # val= 00:05:47.478 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.478 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.478 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.478 01:13:38 -- accel/accel.sh@21 -- # val=dif_verify 00:05:47.478 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.478 01:13:38 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:05:47.478 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.478 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.478 01:13:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:47.478 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.478 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.479 01:13:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:47.479 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.479 01:13:38 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:47.479 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.479 01:13:38 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:47.479 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.479 01:13:38 -- accel/accel.sh@21 -- # val= 00:05:47.479 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.479 01:13:38 -- accel/accel.sh@21 -- # val=software 00:05:47.479 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.479 01:13:38 -- accel/accel.sh@23 -- # 
accel_module=software 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.479 01:13:38 -- accel/accel.sh@21 -- # val=32 00:05:47.479 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.479 01:13:38 -- accel/accel.sh@21 -- # val=32 00:05:47.479 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.479 01:13:38 -- accel/accel.sh@21 -- # val=1 00:05:47.479 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.479 01:13:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:47.479 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.479 01:13:38 -- accel/accel.sh@21 -- # val=No 00:05:47.479 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.479 01:13:38 -- accel/accel.sh@21 -- # val= 00:05:47.479 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:47.479 01:13:38 -- accel/accel.sh@21 -- # val= 00:05:47.479 01:13:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # IFS=: 00:05:47.479 01:13:38 -- accel/accel.sh@20 -- # read -r var val 00:05:48.852 01:13:40 -- accel/accel.sh@21 -- # val= 00:05:48.852 01:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.852 01:13:40 -- accel/accel.sh@20 -- # IFS=: 00:05:48.852 01:13:40 -- accel/accel.sh@20 -- # read -r var val 00:05:48.852 01:13:40 -- accel/accel.sh@21 -- # val= 00:05:48.852 01:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.852 01:13:40 -- accel/accel.sh@20 -- # IFS=: 00:05:48.852 01:13:40 -- accel/accel.sh@20 -- # read -r var val 00:05:48.852 01:13:40 -- accel/accel.sh@21 -- # val= 00:05:48.852 01:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.852 01:13:40 -- accel/accel.sh@20 -- # IFS=: 00:05:48.852 01:13:40 -- accel/accel.sh@20 -- # read -r var val 00:05:48.852 01:13:40 -- accel/accel.sh@21 -- # val= 00:05:48.852 01:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.852 01:13:40 -- accel/accel.sh@20 -- # IFS=: 00:05:48.852 01:13:40 -- accel/accel.sh@20 -- # read -r var val 00:05:48.852 01:13:40 -- accel/accel.sh@21 -- # val= 00:05:48.852 01:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.852 01:13:40 -- accel/accel.sh@20 -- # IFS=: 00:05:48.852 01:13:40 -- accel/accel.sh@20 -- # read -r var val 00:05:48.852 01:13:40 -- accel/accel.sh@21 -- # val= 00:05:48.852 01:13:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.852 01:13:40 -- accel/accel.sh@20 -- # IFS=: 00:05:48.852 01:13:40 -- accel/accel.sh@20 -- # read -r var val 00:05:48.852 01:13:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:48.852 01:13:40 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:05:48.852 01:13:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.852 00:05:48.852 real 0m2.963s 00:05:48.852 user 0m2.675s 00:05:48.852 sys 0m0.282s 00:05:48.852 01:13:40 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.852 01:13:40 -- common/autotest_common.sh@10 -- # set +x 00:05:48.852 ************************************ 00:05:48.852 END TEST accel_dif_verify 00:05:48.852 ************************************ 00:05:48.852 01:13:40 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:48.852 01:13:40 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:48.852 01:13:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.852 01:13:40 -- common/autotest_common.sh@10 -- # set +x 00:05:48.852 ************************************ 00:05:48.852 START TEST accel_dif_generate 00:05:48.852 ************************************ 00:05:48.852 01:13:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:05:48.852 01:13:40 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.852 01:13:40 -- accel/accel.sh@17 -- # local accel_module 00:05:48.852 01:13:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:05:48.852 01:13:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:48.852 01:13:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.852 01:13:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.852 01:13:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.852 01:13:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.852 01:13:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.852 01:13:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.852 01:13:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.852 01:13:40 -- accel/accel.sh@42 -- # jq -r . 00:05:48.852 [2024-07-27 01:13:40.247283] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:48.852 [2024-07-27 01:13:40.247373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525799 ] 00:05:48.852 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.852 [2024-07-27 01:13:40.312391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.852 [2024-07-27 01:13:40.433009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.225 01:13:41 -- accel/accel.sh@18 -- # out=' 00:05:50.225 SPDK Configuration: 00:05:50.225 Core mask: 0x1 00:05:50.225 00:05:50.225 Accel Perf Configuration: 00:05:50.225 Workload Type: dif_generate 00:05:50.225 Vector size: 4096 bytes 00:05:50.225 Transfer size: 4096 bytes 00:05:50.225 Block size: 512 bytes 00:05:50.225 Metadata size: 8 bytes 00:05:50.225 Vector count 1 00:05:50.225 Module: software 00:05:50.225 Queue depth: 32 00:05:50.225 Allocate depth: 32 00:05:50.225 # threads/core: 1 00:05:50.225 Run time: 1 seconds 00:05:50.225 Verify: No 00:05:50.225 00:05:50.225 Running for 1 seconds... 
00:05:50.225 00:05:50.225 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:50.225 ------------------------------------------------------------------------------------ 00:05:50.225 0,0 96032/s 375 MiB/s 0 0 00:05:50.225 ==================================================================================== 00:05:50.225 Total 96032/s 375 MiB/s 0 0' 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.225 01:13:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.225 01:13:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:50.225 01:13:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.225 01:13:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.225 01:13:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.225 01:13:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.225 01:13:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.225 01:13:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.225 01:13:41 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.225 01:13:41 -- accel/accel.sh@42 -- # jq -r . 00:05:50.225 [2024-07-27 01:13:41.734709] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:50.225 [2024-07-27 01:13:41.734801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525945 ] 00:05:50.225 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.225 [2024-07-27 01:13:41.798924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.225 [2024-07-27 01:13:41.917157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.225 01:13:41 -- accel/accel.sh@21 -- # val= 00:05:50.225 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.225 01:13:41 -- accel/accel.sh@21 -- # val= 00:05:50.225 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.225 01:13:41 -- accel/accel.sh@21 -- # val=0x1 00:05:50.225 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.225 01:13:41 -- accel/accel.sh@21 -- # val= 00:05:50.225 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.225 01:13:41 -- accel/accel.sh@21 -- # val= 00:05:50.225 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.225 01:13:41 -- accel/accel.sh@21 -- # val=dif_generate 00:05:50.225 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.225 01:13:41 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.225 01:13:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:50.225 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.225 01:13:41 -- accel/accel.sh@20 -- # IFS=:
00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.226 01:13:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:50.226 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.226 01:13:41 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:50.226 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.226 01:13:41 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:50.226 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.226 01:13:41 -- accel/accel.sh@21 -- # val= 00:05:50.226 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.226 01:13:41 -- accel/accel.sh@21 -- # val=software 00:05:50.226 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.226 01:13:41 -- accel/accel.sh@23 -- # accel_module=software 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.226 01:13:41 -- accel/accel.sh@21 -- # val=32 00:05:50.226 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.226 01:13:41 -- accel/accel.sh@21 -- # val=32 00:05:50.226 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.226 01:13:41 -- accel/accel.sh@21 -- # val=1 00:05:50.226 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.226 01:13:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:50.226 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.226 01:13:41 -- accel/accel.sh@21 -- # val=No 00:05:50.226 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.226 01:13:41 -- accel/accel.sh@21 -- # val= 00:05:50.226 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:50.226 01:13:41 -- accel/accel.sh@21 -- # val= 00:05:50.226 01:13:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.226 01:13:41 -- accel/accel.sh@20 -- # IFS=: 00:05:50.483 01:13:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.856 01:13:43 -- accel/accel.sh@21 -- # val= 00:05:51.856 01:13:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.856 01:13:43 -- accel/accel.sh@20 -- # IFS=: 00:05:51.856 01:13:43 -- accel/accel.sh@20 -- # read -r var val 00:05:51.856 01:13:43 -- accel/accel.sh@21 -- # val= 00:05:51.856 01:13:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.856 01:13:43 -- accel/accel.sh@20 -- # IFS=: 00:05:51.856 01:13:43 -- accel/accel.sh@20 -- # read -r var val 00:05:51.856 01:13:43 -- accel/accel.sh@21 -- # val= 00:05:51.856 01:13:43 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:51.856 01:13:43 -- accel/accel.sh@20 -- # IFS=: 00:05:51.856 01:13:43 -- accel/accel.sh@20 -- # read -r var val 00:05:51.856 01:13:43 -- accel/accel.sh@21 -- # val= 00:05:51.856 01:13:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.856 01:13:43 -- accel/accel.sh@20 -- # IFS=: 00:05:51.856 01:13:43 -- accel/accel.sh@20 -- # read -r var val 00:05:51.856 01:13:43 -- accel/accel.sh@21 -- # val= 00:05:51.856 01:13:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.856 01:13:43 -- accel/accel.sh@20 -- # IFS=: 00:05:51.856 01:13:43 -- accel/accel.sh@20 -- # read -r var val 00:05:51.856 01:13:43 -- accel/accel.sh@21 -- # val= 00:05:51.856 01:13:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.856 01:13:43 -- accel/accel.sh@20 -- # IFS=: 00:05:51.856 01:13:43 -- accel/accel.sh@20 -- # read -r var val 00:05:51.856 01:13:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:51.856 01:13:43 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:05:51.856 01:13:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.856 00:05:51.856 real 0m2.957s 00:05:51.856 user 0m2.652s 00:05:51.856 sys 0m0.298s 00:05:51.856 01:13:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.856 01:13:43 -- common/autotest_common.sh@10 -- # set +x 00:05:51.856 ************************************ 00:05:51.856 END TEST accel_dif_generate 00:05:51.856 ************************************ 00:05:51.856 01:13:43 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:51.856 01:13:43 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:51.856 01:13:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.856 01:13:43 -- common/autotest_common.sh@10 -- # set +x 00:05:51.856 ************************************ 00:05:51.856 START TEST accel_dif_generate_copy 00:05:51.856 ************************************ 00:05:51.856 01:13:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:05:51.856 01:13:43 -- accel/accel.sh@16 -- # local accel_opc 00:05:51.856 01:13:43 -- accel/accel.sh@17 -- # local accel_module 00:05:51.856 01:13:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:05:51.856 01:13:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:51.856 01:13:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.856 01:13:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.856 01:13:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.856 01:13:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.856 01:13:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.856 01:13:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.856 01:13:43 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.856 01:13:43 -- accel/accel.sh@42 -- # jq -r . 00:05:51.856 [2024-07-27 01:13:43.229564] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
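accel_dif_generate, which just finished above, and accel_dif_generate_copy, whose first run is starting here, cover the generate side of the same DIF support: one computes protection information over the source buffer, the other computes it while copying into a destination buffer. As sketches under the same path assumption as the earlier examples, the traced commands are simply:

  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w dif_generate
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w dif_generate_copy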
00:05:51.856 [2024-07-27 01:13:43.229652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526103 ] 00:05:51.856 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.856 [2024-07-27 01:13:43.290615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.856 [2024-07-27 01:13:43.408563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.229 01:13:44 -- accel/accel.sh@18 -- # out=' 00:05:53.229 SPDK Configuration: 00:05:53.229 Core mask: 0x1 00:05:53.229 00:05:53.229 Accel Perf Configuration: 00:05:53.229 Workload Type: dif_generate_copy 00:05:53.229 Vector size: 4096 bytes 00:05:53.229 Transfer size: 4096 bytes 00:05:53.229 Vector count 1 00:05:53.229 Module: software 00:05:53.229 Queue depth: 32 00:05:53.229 Allocate depth: 32 00:05:53.229 # threads/core: 1 00:05:53.229 Run time: 1 seconds 00:05:53.229 Verify: No 00:05:53.229 00:05:53.229 Running for 1 seconds... 00:05:53.229 00:05:53.229 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:53.229 ------------------------------------------------------------------------------------ 00:05:53.229 0,0 76256/s 297 MiB/s 0 0 00:05:53.229 ==================================================================================== 00:05:53.229 Total 76256/s 297 MiB/s 0 0' 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.229 01:13:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.229 01:13:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:53.229 01:13:44 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.229 01:13:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.229 01:13:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.229 01:13:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.229 01:13:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.229 01:13:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.229 01:13:44 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.229 01:13:44 -- accel/accel.sh@42 -- # jq -r . 00:05:53.229 [2024-07-27 01:13:44.699301] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:05:53.229 [2024-07-27 01:13:44.699388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526323 ] 00:05:53.229 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.229 [2024-07-27 01:13:44.762856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.229 [2024-07-27 01:13:44.882421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.229 01:13:44 -- accel/accel.sh@21 -- # val= 00:05:53.229 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.229 01:13:44 -- accel/accel.sh@21 -- # val= 00:05:53.229 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.229 01:13:44 -- accel/accel.sh@21 -- # val=0x1 00:05:53.229 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.229 01:13:44 -- accel/accel.sh@21 -- # val= 00:05:53.229 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.229 01:13:44 -- accel/accel.sh@21 -- # val= 00:05:53.229 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.229 01:13:44 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:05:53.229 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.229 01:13:44 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.229 01:13:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:53.229 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.229 01:13:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:53.229 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.229 01:13:44 -- accel/accel.sh@21 -- # val= 00:05:53.229 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.229 01:13:44 -- accel/accel.sh@21 -- # val=software 00:05:53.229 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.229 01:13:44 -- accel/accel.sh@23 -- # accel_module=software 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.229 01:13:44 -- accel/accel.sh@21 -- # val=32 00:05:53.229 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.229 01:13:44 -- accel/accel.sh@21 -- # val=32 00:05:53.229 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.229 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.230 01:13:44 -- accel/accel.sh@20 -- # read -r var 
val 00:05:53.230 01:13:44 -- accel/accel.sh@21 -- # val=1 00:05:53.230 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.230 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.230 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.230 01:13:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:53.230 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.230 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.230 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.230 01:13:44 -- accel/accel.sh@21 -- # val=No 00:05:53.230 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.230 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.230 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.230 01:13:44 -- accel/accel.sh@21 -- # val= 00:05:53.230 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.230 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.230 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.230 01:13:44 -- accel/accel.sh@21 -- # val= 00:05:53.230 01:13:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.230 01:13:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.230 01:13:44 -- accel/accel.sh@20 -- # read -r var val 00:05:54.604 01:13:46 -- accel/accel.sh@21 -- # val= 00:05:54.604 01:13:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.604 01:13:46 -- accel/accel.sh@20 -- # IFS=: 00:05:54.604 01:13:46 -- accel/accel.sh@20 -- # read -r var val 00:05:54.604 01:13:46 -- accel/accel.sh@21 -- # val= 00:05:54.604 01:13:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.604 01:13:46 -- accel/accel.sh@20 -- # IFS=: 00:05:54.604 01:13:46 -- accel/accel.sh@20 -- # read -r var val 00:05:54.604 01:13:46 -- accel/accel.sh@21 -- # val= 00:05:54.604 01:13:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.604 01:13:46 -- accel/accel.sh@20 -- # IFS=: 00:05:54.604 01:13:46 -- accel/accel.sh@20 -- # read -r var val 00:05:54.604 01:13:46 -- accel/accel.sh@21 -- # val= 00:05:54.604 01:13:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.604 01:13:46 -- accel/accel.sh@20 -- # IFS=: 00:05:54.604 01:13:46 -- accel/accel.sh@20 -- # read -r var val 00:05:54.604 01:13:46 -- accel/accel.sh@21 -- # val= 00:05:54.604 01:13:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.604 01:13:46 -- accel/accel.sh@20 -- # IFS=: 00:05:54.604 01:13:46 -- accel/accel.sh@20 -- # read -r var val 00:05:54.604 01:13:46 -- accel/accel.sh@21 -- # val= 00:05:54.604 01:13:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.604 01:13:46 -- accel/accel.sh@20 -- # IFS=: 00:05:54.604 01:13:46 -- accel/accel.sh@20 -- # read -r var val 00:05:54.604 01:13:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:54.604 01:13:46 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:05:54.604 01:13:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.604 00:05:54.604 real 0m2.956s 00:05:54.604 user 0m2.640s 00:05:54.604 sys 0m0.308s 00:05:54.604 01:13:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.604 01:13:46 -- common/autotest_common.sh@10 -- # set +x 00:05:54.604 ************************************ 00:05:54.604 END TEST accel_dif_generate_copy 00:05:54.604 ************************************ 00:05:54.604 01:13:46 -- accel/accel.sh@107 -- # [[ y == y ]] 00:05:54.604 01:13:46 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.604 01:13:46 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:54.604 01:13:46 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.604 01:13:46 -- common/autotest_common.sh@10 -- # set +x 00:05:54.604 ************************************ 00:05:54.604 START TEST accel_comp 00:05:54.604 ************************************ 00:05:54.604 01:13:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.604 01:13:46 -- accel/accel.sh@16 -- # local accel_opc 00:05:54.604 01:13:46 -- accel/accel.sh@17 -- # local accel_module 00:05:54.604 01:13:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.604 01:13:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.604 01:13:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.604 01:13:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.604 01:13:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.604 01:13:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.604 01:13:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.604 01:13:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.604 01:13:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.604 01:13:46 -- accel/accel.sh@42 -- # jq -r . 00:05:54.604 [2024-07-27 01:13:46.210680] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:54.604 [2024-07-27 01:13:46.210766] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526526 ] 00:05:54.604 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.604 [2024-07-27 01:13:46.272462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.862 [2024-07-27 01:13:46.392975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.237 01:13:47 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:56.237 00:05:56.237 SPDK Configuration: 00:05:56.237 Core mask: 0x1 00:05:56.237 00:05:56.237 Accel Perf Configuration: 00:05:56.237 Workload Type: compress 00:05:56.237 Transfer size: 4096 bytes 00:05:56.237 Vector count 1 00:05:56.237 Module: software 00:05:56.237 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:56.237 Queue depth: 32 00:05:56.237 Allocate depth: 32 00:05:56.237 # threads/core: 1 00:05:56.237 Run time: 1 seconds 00:05:56.237 Verify: No 00:05:56.237 00:05:56.237 Running for 1 seconds... 
00:05:56.237 00:05:56.237 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:56.237 ------------------------------------------------------------------------------------ 00:05:56.237 0,0 32256/s 126 MiB/s 0 0 00:05:56.237 ==================================================================================== 00:05:56.237 Total 32256/s 126 MiB/s 0 0' 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.237 01:13:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.237 01:13:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:56.237 01:13:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.237 01:13:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.237 01:13:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.237 01:13:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.237 01:13:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.237 01:13:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.237 01:13:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.237 01:13:47 -- accel/accel.sh@42 -- # jq -r . 00:05:56.237 [2024-07-27 01:13:47.699436] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:56.237 [2024-07-27 01:13:47.699531] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526672 ] 00:05:56.237 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.237 [2024-07-27 01:13:47.761878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.237 [2024-07-27 01:13:47.878128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.237 01:13:47 -- accel/accel.sh@21 -- # val= 00:05:56.237 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.237 01:13:47 -- accel/accel.sh@21 -- # val= 00:05:56.237 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.237 01:13:47 -- accel/accel.sh@21 -- # val= 00:05:56.237 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.237 01:13:47 -- accel/accel.sh@21 -- # val=0x1 00:05:56.237 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.237 01:13:47 -- accel/accel.sh@21 -- # val= 00:05:56.237 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.237 01:13:47 -- accel/accel.sh@21 -- # val= 00:05:56.237 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.237 01:13:47 -- accel/accel.sh@21 -- # val=compress 00:05:56.237 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.237
01:13:47 -- accel/accel.sh@24 -- # accel_opc=compress 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.237 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.237 01:13:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:56.237 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.238 01:13:47 -- accel/accel.sh@21 -- # val= 00:05:56.238 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.238 01:13:47 -- accel/accel.sh@21 -- # val=software 00:05:56.238 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.238 01:13:47 -- accel/accel.sh@23 -- # accel_module=software 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.238 01:13:47 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:56.238 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.238 01:13:47 -- accel/accel.sh@21 -- # val=32 00:05:56.238 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.238 01:13:47 -- accel/accel.sh@21 -- # val=32 00:05:56.238 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.238 01:13:47 -- accel/accel.sh@21 -- # val=1 00:05:56.238 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.238 01:13:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:56.238 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.238 01:13:47 -- accel/accel.sh@21 -- # val=No 00:05:56.238 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.238 01:13:47 -- accel/accel.sh@21 -- # val= 00:05:56.238 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.238 01:13:47 -- accel/accel.sh@21 -- # val= 00:05:56.238 01:13:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.238 01:13:47 -- accel/accel.sh@20 -- # read -r var val 00:05:57.612 01:13:49 -- accel/accel.sh@21 -- # val= 00:05:57.612 01:13:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.612 01:13:49 -- accel/accel.sh@20 -- # IFS=: 00:05:57.612 01:13:49 -- accel/accel.sh@20 -- # read -r var val 00:05:57.612 01:13:49 -- accel/accel.sh@21 -- # val= 00:05:57.612 01:13:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.612 01:13:49 -- accel/accel.sh@20 -- # IFS=: 00:05:57.612 01:13:49 -- accel/accel.sh@20 -- # read -r var val 00:05:57.612 01:13:49 -- accel/accel.sh@21 -- # val= 00:05:57.612 01:13:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.612 01:13:49 -- accel/accel.sh@20 -- # 
IFS=: 00:05:57.612 01:13:49 -- accel/accel.sh@20 -- # read -r var val 00:05:57.612 01:13:49 -- accel/accel.sh@21 -- # val= 00:05:57.612 01:13:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.612 01:13:49 -- accel/accel.sh@20 -- # IFS=: 00:05:57.612 01:13:49 -- accel/accel.sh@20 -- # read -r var val 00:05:57.612 01:13:49 -- accel/accel.sh@21 -- # val= 00:05:57.612 01:13:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.612 01:13:49 -- accel/accel.sh@20 -- # IFS=: 00:05:57.612 01:13:49 -- accel/accel.sh@20 -- # read -r var val 00:05:57.612 01:13:49 -- accel/accel.sh@21 -- # val= 00:05:57.612 01:13:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.612 01:13:49 -- accel/accel.sh@20 -- # IFS=: 00:05:57.612 01:13:49 -- accel/accel.sh@20 -- # read -r var val 00:05:57.612 01:13:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:57.612 01:13:49 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:05:57.612 01:13:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.612 00:05:57.612 real 0m2.958s 00:05:57.612 user 0m2.662s 00:05:57.612 sys 0m0.289s 00:05:57.612 01:13:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.612 01:13:49 -- common/autotest_common.sh@10 -- # set +x 00:05:57.612 ************************************ 00:05:57.612 END TEST accel_comp 00:05:57.612 ************************************ 00:05:57.612 01:13:49 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:57.612 01:13:49 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:57.612 01:13:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.612 01:13:49 -- common/autotest_common.sh@10 -- # set +x 00:05:57.612 ************************************ 00:05:57.612 START TEST accel_decomp 00:05:57.612 ************************************ 00:05:57.612 01:13:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:57.612 01:13:49 -- accel/accel.sh@16 -- # local accel_opc 00:05:57.612 01:13:49 -- accel/accel.sh@17 -- # local accel_module 00:05:57.612 01:13:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:57.612 01:13:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:57.612 01:13:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.612 01:13:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.612 01:13:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.612 01:13:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.612 01:13:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.612 01:13:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.612 01:13:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.612 01:13:49 -- accel/accel.sh@42 -- # jq -r . 00:05:57.612 [2024-07-27 01:13:49.192168] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
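For reference, the decompress case being traced above could be reproduced by hand with the same accel_perf flags. This is only a sketch, not part of the captured run; it assumes the same Jenkins workspace layout and omits the JSON config that accel.sh pipes in over /dev/fd/62.

  # Hypothetical manual re-run of the accel_perf decompress case shown above.
  # The SPDK path is assumed from the workspace used by this job.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t 1: run for 1 second; -w decompress: workload type;
  # -l .../test/accel/bib: pre-compressed input file; -y: verify the output.
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK/test/accel/bib" -y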
00:05:57.612 [2024-07-27 01:13:49.192258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526873 ] 00:05:57.612 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.612 [2024-07-27 01:13:49.255248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.871 [2024-07-27 01:13:49.375751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.245 01:13:50 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:59.245 00:05:59.245 SPDK Configuration: 00:05:59.245 Core mask: 0x1 00:05:59.245 00:05:59.245 Accel Perf Configuration: 00:05:59.245 Workload Type: decompress 00:05:59.245 Transfer size: 4096 bytes 00:05:59.245 Vector count 1 00:05:59.245 Module: software 00:05:59.245 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.245 Queue depth: 32 00:05:59.245 Allocate depth: 32 00:05:59.245 # threads/core: 1 00:05:59.245 Run time: 1 seconds 00:05:59.245 Verify: Yes 00:05:59.245 00:05:59.245 Running for 1 seconds... 00:05:59.245 00:05:59.245 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:59.245 ------------------------------------------------------------------------------------ 00:05:59.245 0,0 55456/s 102 MiB/s 0 0 00:05:59.245 ==================================================================================== 00:05:59.245 Total 55456/s 216 MiB/s 0 0' 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:59.245 01:13:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.245 01:13:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.245 01:13:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.245 01:13:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.245 01:13:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.245 01:13:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.245 01:13:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.245 01:13:50 -- accel/accel.sh@42 -- # jq -r . 00:05:59.245 [2024-07-27 01:13:50.680977] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
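As a quick sanity check (not part of the log), the total bandwidth reported for the software decompress run above is consistent with transfers per second times the 4096-byte transfer size:

  # 55456 transfers/s * 4096 bytes per transfer, converted to MiB/s
  echo $(( 55456 * 4096 / 1024 / 1024 ))   # prints 216, matching "216 MiB/s"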
00:05:59.245 [2024-07-27 01:13:50.681076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527101 ] 00:05:59.245 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.245 [2024-07-27 01:13:50.744224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.245 [2024-07-27 01:13:50.863897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val= 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val= 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val= 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val=0x1 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val= 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val= 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val=decompress 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val= 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val=software 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@23 -- # accel_module=software 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val=32 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 
-- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val=32 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val=1 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val=Yes 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val= 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.245 01:13:50 -- accel/accel.sh@21 -- # val= 00:05:59.245 01:13:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.245 01:13:50 -- accel/accel.sh@20 -- # read -r var val 00:06:00.619 01:13:52 -- accel/accel.sh@21 -- # val= 00:06:00.619 01:13:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.619 01:13:52 -- accel/accel.sh@20 -- # IFS=: 00:06:00.619 01:13:52 -- accel/accel.sh@20 -- # read -r var val 00:06:00.619 01:13:52 -- accel/accel.sh@21 -- # val= 00:06:00.619 01:13:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.619 01:13:52 -- accel/accel.sh@20 -- # IFS=: 00:06:00.619 01:13:52 -- accel/accel.sh@20 -- # read -r var val 00:06:00.619 01:13:52 -- accel/accel.sh@21 -- # val= 00:06:00.619 01:13:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.619 01:13:52 -- accel/accel.sh@20 -- # IFS=: 00:06:00.619 01:13:52 -- accel/accel.sh@20 -- # read -r var val 00:06:00.619 01:13:52 -- accel/accel.sh@21 -- # val= 00:06:00.619 01:13:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.619 01:13:52 -- accel/accel.sh@20 -- # IFS=: 00:06:00.619 01:13:52 -- accel/accel.sh@20 -- # read -r var val 00:06:00.619 01:13:52 -- accel/accel.sh@21 -- # val= 00:06:00.619 01:13:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.619 01:13:52 -- accel/accel.sh@20 -- # IFS=: 00:06:00.619 01:13:52 -- accel/accel.sh@20 -- # read -r var val 00:06:00.619 01:13:52 -- accel/accel.sh@21 -- # val= 00:06:00.619 01:13:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.619 01:13:52 -- accel/accel.sh@20 -- # IFS=: 00:06:00.619 01:13:52 -- accel/accel.sh@20 -- # read -r var val 00:06:00.619 01:13:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:00.619 01:13:52 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:00.619 01:13:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.619 00:06:00.619 real 0m2.979s 00:06:00.619 user 0m2.685s 00:06:00.619 sys 0m0.287s 00:06:00.619 01:13:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.619 01:13:52 -- common/autotest_common.sh@10 -- # set +x 00:06:00.619 ************************************ 00:06:00.619 END TEST accel_decomp 00:06:00.619 ************************************ 00:06:00.619 01:13:52 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.619 01:13:52 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:00.619 01:13:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.619 01:13:52 -- common/autotest_common.sh@10 -- # set +x 00:06:00.619 ************************************ 00:06:00.619 START TEST accel_decmop_full 00:06:00.619 ************************************ 00:06:00.619 01:13:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.619 01:13:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.619 01:13:52 -- accel/accel.sh@17 -- # local accel_module 00:06:00.619 01:13:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.619 01:13:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.619 01:13:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.619 01:13:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.619 01:13:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.619 01:13:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.619 01:13:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.619 01:13:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.619 01:13:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.619 01:13:52 -- accel/accel.sh@42 -- # jq -r . 00:06:00.619 [2024-07-27 01:13:52.192859] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:00.619 [2024-07-27 01:13:52.192950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527254 ] 00:06:00.619 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.619 [2024-07-27 01:13:52.258238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.878 [2024-07-27 01:13:52.379733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.270 01:13:53 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:02.270 00:06:02.270 SPDK Configuration: 00:06:02.270 Core mask: 0x1 00:06:02.270 00:06:02.270 Accel Perf Configuration: 00:06:02.270 Workload Type: decompress 00:06:02.270 Transfer size: 111250 bytes 00:06:02.270 Vector count 1 00:06:02.270 Module: software 00:06:02.270 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.270 Queue depth: 32 00:06:02.270 Allocate depth: 32 00:06:02.270 # threads/core: 1 00:06:02.270 Run time: 1 seconds 00:06:02.270 Verify: Yes 00:06:02.270 00:06:02.270 Running for 1 seconds... 
00:06:02.270 00:06:02.270 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:02.270 ------------------------------------------------------------------------------------ 00:06:02.270 0,0 3808/s 157 MiB/s 0 0 00:06:02.270 ==================================================================================== 00:06:02.270 Total 3808/s 404 MiB/s 0 0' 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.270 01:13:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.270 01:13:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.270 01:13:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.270 01:13:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.270 01:13:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.270 01:13:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.270 01:13:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.270 01:13:53 -- accel/accel.sh@42 -- # jq -r . 00:06:02.270 [2024-07-27 01:13:53.694419] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:02.270 [2024-07-27 01:13:53.694508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527448 ] 00:06:02.270 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.270 [2024-07-27 01:13:53.760968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.270 [2024-07-27 01:13:53.880402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val= 00:06:02.270 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val= 00:06:02.270 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val= 00:06:02.270 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val=0x1 00:06:02.270 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val= 00:06:02.270 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val= 00:06:02.270 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val=decompress 00:06:02.270 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 
00:06:02.270 01:13:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:02.270 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val= 00:06:02.270 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val=software 00:06:02.270 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.270 01:13:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.270 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val=32 00:06:02.270 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val=32 00:06:02.270 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val=1 00:06:02.270 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.270 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.270 01:13:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:02.271 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.271 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.271 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.271 01:13:53 -- accel/accel.sh@21 -- # val=Yes 00:06:02.271 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.271 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.271 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.271 01:13:53 -- accel/accel.sh@21 -- # val= 00:06:02.271 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.271 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.271 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:02.271 01:13:53 -- accel/accel.sh@21 -- # val= 00:06:02.271 01:13:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.271 01:13:53 -- accel/accel.sh@20 -- # IFS=: 00:06:02.271 01:13:53 -- accel/accel.sh@20 -- # read -r var val 00:06:03.646 01:13:55 -- accel/accel.sh@21 -- # val= 00:06:03.646 01:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.646 01:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:03.646 01:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:03.646 01:13:55 -- accel/accel.sh@21 -- # val= 00:06:03.646 01:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.646 01:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:03.646 01:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:03.646 01:13:55 -- accel/accel.sh@21 -- # val= 00:06:03.646 01:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.646 01:13:55 -- 
accel/accel.sh@20 -- # IFS=: 00:06:03.646 01:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:03.646 01:13:55 -- accel/accel.sh@21 -- # val= 00:06:03.646 01:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.646 01:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:03.646 01:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:03.646 01:13:55 -- accel/accel.sh@21 -- # val= 00:06:03.646 01:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.646 01:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:03.646 01:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:03.646 01:13:55 -- accel/accel.sh@21 -- # val= 00:06:03.646 01:13:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.646 01:13:55 -- accel/accel.sh@20 -- # IFS=: 00:06:03.646 01:13:55 -- accel/accel.sh@20 -- # read -r var val 00:06:03.646 01:13:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:03.646 01:13:55 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:03.646 01:13:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.646 00:06:03.646 real 0m3.007s 00:06:03.646 user 0m2.708s 00:06:03.646 sys 0m0.292s 00:06:03.646 01:13:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.646 01:13:55 -- common/autotest_common.sh@10 -- # set +x 00:06:03.646 ************************************ 00:06:03.646 END TEST accel_decmop_full 00:06:03.646 ************************************ 00:06:03.646 01:13:55 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:03.646 01:13:55 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:03.646 01:13:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.646 01:13:55 -- common/autotest_common.sh@10 -- # set +x 00:06:03.646 ************************************ 00:06:03.646 START TEST accel_decomp_mcore 00:06:03.646 ************************************ 00:06:03.646 01:13:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:03.646 01:13:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:03.646 01:13:55 -- accel/accel.sh@17 -- # local accel_module 00:06:03.646 01:13:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:03.646 01:13:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:03.646 01:13:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.646 01:13:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.646 01:13:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.646 01:13:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.646 01:13:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.646 01:13:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.646 01:13:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.646 01:13:55 -- accel/accel.sh@42 -- # jq -r . 00:06:03.646 [2024-07-27 01:13:55.224731] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
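The accel_decmop_full case that ends above passes -o 0, so accel_perf decompresses whole 111250-byte buffers instead of 4 KiB chunks. A rough check (not from the log) shows that the much lower transfer rate still corresponds to the higher reported bandwidth:

  # 3808 transfers/s * 111250 bytes per transfer, converted to MiB/s
  echo $(( 3808 * 111250 / 1024 / 1024 ))   # prints 404, matching "404 MiB/s"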
00:06:03.646 [2024-07-27 01:13:55.224821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527684 ] 00:06:03.646 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.646 [2024-07-27 01:13:55.287078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.905 [2024-07-27 01:13:55.410484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.905 [2024-07-27 01:13:55.410539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.905 [2024-07-27 01:13:55.410593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.905 [2024-07-27 01:13:55.410596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.279 01:13:56 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:05.279 00:06:05.279 SPDK Configuration: 00:06:05.279 Core mask: 0xf 00:06:05.279 00:06:05.279 Accel Perf Configuration: 00:06:05.279 Workload Type: decompress 00:06:05.279 Transfer size: 4096 bytes 00:06:05.279 Vector count 1 00:06:05.279 Module: software 00:06:05.279 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:05.279 Queue depth: 32 00:06:05.279 Allocate depth: 32 00:06:05.279 # threads/core: 1 00:06:05.279 Run time: 1 seconds 00:06:05.279 Verify: Yes 00:06:05.279 00:06:05.279 Running for 1 seconds... 00:06:05.279 00:06:05.279 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:05.279 ------------------------------------------------------------------------------------ 00:06:05.279 0,0 50368/s 92 MiB/s 0 0 00:06:05.279 3,0 50880/s 93 MiB/s 0 0 00:06:05.279 2,0 50848/s 93 MiB/s 0 0 00:06:05.279 1,0 50784/s 93 MiB/s 0 0 00:06:05.279 ==================================================================================== 00:06:05.279 Total 202880/s 792 MiB/s 0 0' 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.279 01:13:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.279 01:13:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:05.279 01:13:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.279 01:13:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.279 01:13:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.279 01:13:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.279 01:13:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.279 01:13:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.279 01:13:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.279 01:13:56 -- accel/accel.sh@42 -- # jq -r . 00:06:05.279 [2024-07-27 01:13:56.723549] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
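The accel_decomp_mcore run above uses core mask 0xf, so accel_perf starts one reactor on each of cores 0 through 3, as the "Reactor started" lines show. As a quick check (not part of the log), the four per-core rows in the table sum to the reported total:

  # Per-core transfer rates from the table above, summed and converted to MiB/s
  echo $(( 50368 + 50880 + 50848 + 50784 ))   # prints 202880 transfers/s
  echo $(( 202880 * 4096 / 1024 / 1024 ))     # prints 792, matching "792 MiB/s"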
00:06:05.279 [2024-07-27 01:13:56.723643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527833 ] 00:06:05.279 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.279 [2024-07-27 01:13:56.786082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.279 [2024-07-27 01:13:56.908954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.279 [2024-07-27 01:13:56.909008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.279 [2024-07-27 01:13:56.909074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.279 [2024-07-27 01:13:56.909082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.279 01:13:56 -- accel/accel.sh@21 -- # val= 00:06:05.279 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.279 01:13:56 -- accel/accel.sh@21 -- # val= 00:06:05.279 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.279 01:13:56 -- accel/accel.sh@21 -- # val= 00:06:05.279 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.279 01:13:56 -- accel/accel.sh@21 -- # val=0xf 00:06:05.279 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.279 01:13:56 -- accel/accel.sh@21 -- # val= 00:06:05.279 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.279 01:13:56 -- accel/accel.sh@21 -- # val= 00:06:05.279 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.279 01:13:56 -- accel/accel.sh@21 -- # val=decompress 00:06:05.279 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.279 01:13:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.279 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.280 01:13:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:05.280 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.280 01:13:56 -- accel/accel.sh@21 -- # val= 00:06:05.280 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.280 01:13:56 -- accel/accel.sh@21 -- # val=software 00:06:05.280 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.280 01:13:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.280 01:13:56 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:05.280 01:13:56 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.280 01:13:56 -- accel/accel.sh@21 -- # val=32 00:06:05.280 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.280 01:13:56 -- accel/accel.sh@21 -- # val=32 00:06:05.280 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.280 01:13:56 -- accel/accel.sh@21 -- # val=1 00:06:05.280 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.280 01:13:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:05.280 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.280 01:13:56 -- accel/accel.sh@21 -- # val=Yes 00:06:05.280 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.280 01:13:56 -- accel/accel.sh@21 -- # val= 00:06:05.280 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:05.280 01:13:56 -- accel/accel.sh@21 -- # val= 00:06:05.280 01:13:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # IFS=: 00:06:05.280 01:13:56 -- accel/accel.sh@20 -- # read -r var val 00:06:06.654 01:13:58 -- accel/accel.sh@21 -- # val= 00:06:06.654 01:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.654 01:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:06.654 01:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:06.655 01:13:58 -- accel/accel.sh@21 -- # val= 00:06:06.655 01:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:06.655 01:13:58 -- accel/accel.sh@21 -- # val= 00:06:06.655 01:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:06.655 01:13:58 -- accel/accel.sh@21 -- # val= 00:06:06.655 01:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:06.655 01:13:58 -- accel/accel.sh@21 -- # val= 00:06:06.655 01:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:06.655 01:13:58 -- accel/accel.sh@21 -- # val= 00:06:06.655 01:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:06.655 01:13:58 -- accel/accel.sh@21 -- # val= 00:06:06.655 01:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:06.655 01:13:58 -- accel/accel.sh@21 -- # val= 00:06:06.655 01:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.655 
01:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:06.655 01:13:58 -- accel/accel.sh@21 -- # val= 00:06:06.655 01:13:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # IFS=: 00:06:06.655 01:13:58 -- accel/accel.sh@20 -- # read -r var val 00:06:06.655 01:13:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:06.655 01:13:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:06.655 01:13:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.655 00:06:06.655 real 0m2.996s 00:06:06.655 user 0m9.632s 00:06:06.655 sys 0m0.305s 00:06:06.655 01:13:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.655 01:13:58 -- common/autotest_common.sh@10 -- # set +x 00:06:06.655 ************************************ 00:06:06.655 END TEST accel_decomp_mcore 00:06:06.655 ************************************ 00:06:06.655 01:13:58 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.655 01:13:58 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:06.655 01:13:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.655 01:13:58 -- common/autotest_common.sh@10 -- # set +x 00:06:06.655 ************************************ 00:06:06.655 START TEST accel_decomp_full_mcore 00:06:06.655 ************************************ 00:06:06.655 01:13:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.655 01:13:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.655 01:13:58 -- accel/accel.sh@17 -- # local accel_module 00:06:06.655 01:13:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.655 01:13:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.655 01:13:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.655 01:13:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.655 01:13:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.655 01:13:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.655 01:13:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.655 01:13:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.655 01:13:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.655 01:13:58 -- accel/accel.sh@42 -- # jq -r . 00:06:06.655 [2024-07-27 01:13:58.249244] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
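For the accel_decomp_full_mcore case that ends above, the 0xf core mask and the -o 0 full-buffer transfers combine: each of the four cores moves 3808 buffers of 111250 bytes per second. A rough consistency check (not from the log):

  # 4 cores * 3808 transfers/s each, converted to MiB/s at 111250 bytes per transfer
  echo $(( 3808 * 4 ))                       # prints 15232 transfers/s
  echo $(( 15232 * 111250 / 1024 / 1024 ))   # prints 1616, matching "1616 MiB/s"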
00:06:06.655 [2024-07-27 01:13:58.249324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528081 ] 00:06:06.655 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.655 [2024-07-27 01:13:58.315577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.913 [2024-07-27 01:13:58.440172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.913 [2024-07-27 01:13:58.440228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.913 [2024-07-27 01:13:58.440279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.913 [2024-07-27 01:13:58.440282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.288 01:13:59 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:08.288 00:06:08.288 SPDK Configuration: 00:06:08.288 Core mask: 0xf 00:06:08.288 00:06:08.288 Accel Perf Configuration: 00:06:08.288 Workload Type: decompress 00:06:08.288 Transfer size: 111250 bytes 00:06:08.288 Vector count 1 00:06:08.288 Module: software 00:06:08.288 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:08.288 Queue depth: 32 00:06:08.288 Allocate depth: 32 00:06:08.288 # threads/core: 1 00:06:08.288 Run time: 1 seconds 00:06:08.288 Verify: Yes 00:06:08.288 00:06:08.288 Running for 1 seconds... 00:06:08.288 00:06:08.288 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:08.288 ------------------------------------------------------------------------------------ 00:06:08.288 0,0 3808/s 157 MiB/s 0 0 00:06:08.288 3,0 3808/s 157 MiB/s 0 0 00:06:08.288 2,0 3808/s 157 MiB/s 0 0 00:06:08.288 1,0 3808/s 157 MiB/s 0 0 00:06:08.288 ==================================================================================== 00:06:08.288 Total 15232/s 1616 MiB/s 0 0' 00:06:08.288 01:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:08.288 01:13:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:08.288 01:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:08.288 01:13:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:08.288 01:13:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.288 01:13:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.288 01:13:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.288 01:13:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.288 01:13:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.288 01:13:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.288 01:13:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.288 01:13:59 -- accel/accel.sh@42 -- # jq -r . 00:06:08.288 [2024-07-27 01:13:59.747659] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
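When comparing these cases, the summary rows can be pulled out of a saved copy of this console output. The snippet below is only an illustration and assumes the output was captured to a hypothetical build.log file:

  # Extract the "Total <transfers>/s <bandwidth> MiB/s" summary rows from the log
  grep -Eo 'Total[[:space:]]+[0-9]+/s[[:space:]]+[0-9]+ MiB/s' build.log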
00:06:08.288 [2024-07-27 01:13:59.747750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528267 ] 00:06:08.288 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.288 [2024-07-27 01:13:59.810186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.288 [2024-07-27 01:13:59.932784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.288 [2024-07-27 01:13:59.932828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.288 [2024-07-27 01:13:59.932883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.288 [2024-07-27 01:13:59.932887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.288 01:13:59 -- accel/accel.sh@21 -- # val= 00:06:08.288 01:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.288 01:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:08.288 01:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:08.288 01:13:59 -- accel/accel.sh@21 -- # val= 00:06:08.288 01:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.288 01:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:08.288 01:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:08.288 01:13:59 -- accel/accel.sh@21 -- # val= 00:06:08.288 01:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.288 01:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:08.288 01:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:08.288 01:13:59 -- accel/accel.sh@21 -- # val=0xf 00:06:08.288 01:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.288 01:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:08.288 01:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:08.288 01:13:59 -- accel/accel.sh@21 -- # val= 00:06:08.288 01:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.288 01:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:08.288 01:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:08.289 01:13:59 -- accel/accel.sh@21 -- # val= 00:06:08.289 01:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.289 01:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:08.289 01:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:08.289 01:13:59 -- accel/accel.sh@21 -- # val=decompress 00:06:08.289 01:13:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.289 01:13:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:08.289 01:13:59 -- accel/accel.sh@20 -- # IFS=: 00:06:08.289 01:13:59 -- accel/accel.sh@20 -- # read -r var val 00:06:08.289 01:14:00 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:08.289 01:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:08.289 01:14:00 -- accel/accel.sh@21 -- # val= 00:06:08.289 01:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:08.289 01:14:00 -- accel/accel.sh@21 -- # val=software 00:06:08.289 01:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.289 01:14:00 -- accel/accel.sh@23 -- # accel_module=software 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:08.289 01:14:00 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:08.289 01:14:00 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:08.289 01:14:00 -- accel/accel.sh@21 -- # val=32 00:06:08.289 01:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:08.289 01:14:00 -- accel/accel.sh@21 -- # val=32 00:06:08.289 01:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:08.289 01:14:00 -- accel/accel.sh@21 -- # val=1 00:06:08.289 01:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:08.289 01:14:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:08.289 01:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:08.289 01:14:00 -- accel/accel.sh@21 -- # val=Yes 00:06:08.289 01:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:08.289 01:14:00 -- accel/accel.sh@21 -- # val= 00:06:08.289 01:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:08.289 01:14:00 -- accel/accel.sh@21 -- # val= 00:06:08.289 01:14:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # IFS=: 00:06:08.289 01:14:00 -- accel/accel.sh@20 -- # read -r var val 00:06:09.659 01:14:01 -- accel/accel.sh@21 -- # val= 00:06:09.659 01:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:09.659 01:14:01 -- accel/accel.sh@21 -- # val= 00:06:09.659 01:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:09.659 01:14:01 -- accel/accel.sh@21 -- # val= 00:06:09.659 01:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:09.659 01:14:01 -- accel/accel.sh@21 -- # val= 00:06:09.659 01:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:09.659 01:14:01 -- accel/accel.sh@21 -- # val= 00:06:09.659 01:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:09.659 01:14:01 -- accel/accel.sh@21 -- # val= 00:06:09.659 01:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:09.659 01:14:01 -- accel/accel.sh@21 -- # val= 00:06:09.659 01:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:09.659 01:14:01 -- accel/accel.sh@21 -- # val= 00:06:09.659 01:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.659 
01:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:09.659 01:14:01 -- accel/accel.sh@21 -- # val= 00:06:09.659 01:14:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # IFS=: 00:06:09.659 01:14:01 -- accel/accel.sh@20 -- # read -r var val 00:06:09.659 01:14:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.659 01:14:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:09.659 01:14:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.659 00:06:09.659 real 0m3.010s 00:06:09.659 user 0m9.637s 00:06:09.659 sys 0m0.327s 00:06:09.659 01:14:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.659 01:14:01 -- common/autotest_common.sh@10 -- # set +x 00:06:09.659 ************************************ 00:06:09.659 END TEST accel_decomp_full_mcore 00:06:09.659 ************************************ 00:06:09.659 01:14:01 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:09.659 01:14:01 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:09.659 01:14:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.659 01:14:01 -- common/autotest_common.sh@10 -- # set +x 00:06:09.659 ************************************ 00:06:09.659 START TEST accel_decomp_mthread 00:06:09.659 ************************************ 00:06:09.659 01:14:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:09.659 01:14:01 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.659 01:14:01 -- accel/accel.sh@17 -- # local accel_module 00:06:09.659 01:14:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:09.659 01:14:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:09.659 01:14:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.659 01:14:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.659 01:14:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.659 01:14:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.659 01:14:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.659 01:14:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.659 01:14:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.659 01:14:01 -- accel/accel.sh@42 -- # jq -r . 00:06:09.659 [2024-07-27 01:14:01.286253] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:09.659 [2024-07-27 01:14:01.286344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528423 ] 00:06:09.659 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.659 [2024-07-27 01:14:01.347766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.917 [2024-07-27 01:14:01.467978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.289 01:14:02 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:11.289 00:06:11.289 SPDK Configuration: 00:06:11.289 Core mask: 0x1 00:06:11.289 00:06:11.289 Accel Perf Configuration: 00:06:11.289 Workload Type: decompress 00:06:11.289 Transfer size: 4096 bytes 00:06:11.289 Vector count 1 00:06:11.289 Module: software 00:06:11.289 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.289 Queue depth: 32 00:06:11.289 Allocate depth: 32 00:06:11.289 # threads/core: 2 00:06:11.289 Run time: 1 seconds 00:06:11.289 Verify: Yes 00:06:11.289 00:06:11.289 Running for 1 seconds... 00:06:11.289 00:06:11.289 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:11.289 ------------------------------------------------------------------------------------ 00:06:11.289 0,1 28096/s 51 MiB/s 0 0 00:06:11.289 0,0 28000/s 51 MiB/s 0 0 00:06:11.289 ==================================================================================== 00:06:11.289 Total 56096/s 219 MiB/s 0 0' 00:06:11.289 01:14:02 -- accel/accel.sh@20 -- # IFS=: 00:06:11.289 01:14:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:11.289 01:14:02 -- accel/accel.sh@20 -- # read -r var val 00:06:11.289 01:14:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:11.289 01:14:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.289 01:14:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.289 01:14:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.289 01:14:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.289 01:14:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.289 01:14:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.289 01:14:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.289 01:14:02 -- accel/accel.sh@42 -- # jq -r . 00:06:11.289 [2024-07-27 01:14:02.777917] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
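The accel_decomp_mthread run above adds -T 2, so the single core 0 runs two worker threads, which appear as the 0,0 and 0,1 rows in the table. A quick check (not from the log) that the two rows add up to the reported total:

  # Two threads on core 0: 28096/s + 28000/s, converted to MiB/s at 4096 bytes/transfer
  echo $(( 28096 + 28000 ))                 # prints 56096 transfers/s
  echo $(( 56096 * 4096 / 1024 / 1024 ))    # prints 219, matching "219 MiB/s"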
00:06:11.289 [2024-07-27 01:14:02.778003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528631 ] 00:06:11.289 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.289 [2024-07-27 01:14:02.843804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.289 [2024-07-27 01:14:02.962824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.289 01:14:03 -- accel/accel.sh@21 -- # val= 00:06:11.289 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.289 01:14:03 -- accel/accel.sh@21 -- # val= 00:06:11.289 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.289 01:14:03 -- accel/accel.sh@21 -- # val= 00:06:11.289 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.289 01:14:03 -- accel/accel.sh@21 -- # val=0x1 00:06:11.289 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.289 01:14:03 -- accel/accel.sh@21 -- # val= 00:06:11.289 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.289 01:14:03 -- accel/accel.sh@21 -- # val= 00:06:11.289 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.289 01:14:03 -- accel/accel.sh@21 -- # val=decompress 00:06:11.289 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.289 01:14:03 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.289 01:14:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.289 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.289 01:14:03 -- accel/accel.sh@21 -- # val= 00:06:11.289 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.289 01:14:03 -- accel/accel.sh@21 -- # val=software 00:06:11.289 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.289 01:14:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.289 01:14:03 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.289 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.289 01:14:03 -- accel/accel.sh@21 -- # val=32 00:06:11.289 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.289 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.289 01:14:03 
-- accel/accel.sh@20 -- # read -r var val 00:06:11.289 01:14:03 -- accel/accel.sh@21 -- # val=32 00:06:11.290 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.290 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.290 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.290 01:14:03 -- accel/accel.sh@21 -- # val=2 00:06:11.290 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.290 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.290 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.290 01:14:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:11.290 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.290 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.290 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.290 01:14:03 -- accel/accel.sh@21 -- # val=Yes 00:06:11.290 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.290 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.290 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.290 01:14:03 -- accel/accel.sh@21 -- # val= 00:06:11.290 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.290 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.290 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:11.290 01:14:03 -- accel/accel.sh@21 -- # val= 00:06:11.290 01:14:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.290 01:14:03 -- accel/accel.sh@20 -- # IFS=: 00:06:11.290 01:14:03 -- accel/accel.sh@20 -- # read -r var val 00:06:12.661 01:14:04 -- accel/accel.sh@21 -- # val= 00:06:12.661 01:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.661 01:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:12.661 01:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:12.661 01:14:04 -- accel/accel.sh@21 -- # val= 00:06:12.661 01:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.661 01:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:12.661 01:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:12.661 01:14:04 -- accel/accel.sh@21 -- # val= 00:06:12.661 01:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.661 01:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:12.661 01:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:12.661 01:14:04 -- accel/accel.sh@21 -- # val= 00:06:12.661 01:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.661 01:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:12.661 01:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:12.661 01:14:04 -- accel/accel.sh@21 -- # val= 00:06:12.661 01:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.661 01:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:12.661 01:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:12.661 01:14:04 -- accel/accel.sh@21 -- # val= 00:06:12.661 01:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.661 01:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:12.661 01:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:12.661 01:14:04 -- accel/accel.sh@21 -- # val= 00:06:12.661 01:14:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.661 01:14:04 -- accel/accel.sh@20 -- # IFS=: 00:06:12.661 01:14:04 -- accel/accel.sh@20 -- # read -r var val 00:06:12.661 01:14:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.661 01:14:04 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:12.661 01:14:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.661 00:06:12.661 real 0m2.976s 00:06:12.661 user 0m2.689s 00:06:12.661 sys 0m0.280s 00:06:12.661 01:14:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.661 01:14:04 -- common/autotest_common.sh@10 -- # set +x 
00:06:12.661 ************************************ 00:06:12.661 END TEST accel_decomp_mthread 00:06:12.661 ************************************ 00:06:12.661 01:14:04 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:12.661 01:14:04 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:12.661 01:14:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.661 01:14:04 -- common/autotest_common.sh@10 -- # set +x 00:06:12.661 ************************************ 00:06:12.661 START TEST accel_deomp_full_mthread 00:06:12.661 ************************************ 00:06:12.661 01:14:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:12.661 01:14:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.661 01:14:04 -- accel/accel.sh@17 -- # local accel_module 00:06:12.661 01:14:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:12.661 01:14:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:12.661 01:14:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.661 01:14:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.661 01:14:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.661 01:14:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.661 01:14:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.661 01:14:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.661 01:14:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.661 01:14:04 -- accel/accel.sh@42 -- # jq -r . 00:06:12.661 [2024-07-27 01:14:04.286737] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:12.661 [2024-07-27 01:14:04.286822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528850 ] 00:06:12.661 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.661 [2024-07-27 01:14:04.347651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.920 [2024-07-27 01:14:04.467785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.300 01:14:05 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:14.300 00:06:14.300 SPDK Configuration: 00:06:14.300 Core mask: 0x1 00:06:14.300 00:06:14.300 Accel Perf Configuration: 00:06:14.300 Workload Type: decompress 00:06:14.300 Transfer size: 111250 bytes 00:06:14.300 Vector count 1 00:06:14.300 Module: software 00:06:14.300 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:14.300 Queue depth: 32 00:06:14.300 Allocate depth: 32 00:06:14.300 # threads/core: 2 00:06:14.300 Run time: 1 seconds 00:06:14.300 Verify: Yes 00:06:14.300 00:06:14.300 Running for 1 seconds... 
00:06:14.300 00:06:14.300 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:14.300 ------------------------------------------------------------------------------------ 00:06:14.300 0,1 1952/s 80 MiB/s 0 0 00:06:14.300 0,0 1920/s 79 MiB/s 0 0 00:06:14.300 ==================================================================================== 00:06:14.300 Total 3872/s 410 MiB/s 0 0' 00:06:14.300 01:14:05 -- accel/accel.sh@20 -- # IFS=: 00:06:14.300 01:14:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:14.300 01:14:05 -- accel/accel.sh@20 -- # read -r var val 00:06:14.300 01:14:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:14.300 01:14:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.300 01:14:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.300 01:14:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.300 01:14:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.300 01:14:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.300 01:14:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.300 01:14:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.300 01:14:05 -- accel/accel.sh@42 -- # jq -r . 00:06:14.300 [2024-07-27 01:14:05.810794] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:14.300 [2024-07-27 01:14:05.810887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528997 ] 00:06:14.300 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.300 [2024-07-27 01:14:05.872593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.300 [2024-07-27 01:14:05.992995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.558 01:14:06 -- accel/accel.sh@21 -- # val= 00:06:14.558 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.558 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.558 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.558 01:14:06 -- accel/accel.sh@21 -- # val= 00:06:14.558 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.558 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.558 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.558 01:14:06 -- accel/accel.sh@21 -- # val= 00:06:14.558 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.558 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.558 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.558 01:14:06 -- accel/accel.sh@21 -- # val=0x1 00:06:14.558 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.558 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.558 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.558 01:14:06 -- accel/accel.sh@21 -- # val= 00:06:14.558 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.558 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.558 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.558 01:14:06 -- accel/accel.sh@21 -- # val= 00:06:14.558 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.558 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.558 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.559 01:14:06 -- accel/accel.sh@21 -- # val=decompress 00:06:14.559 
01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.559 01:14:06 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.559 01:14:06 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:14.559 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.559 01:14:06 -- accel/accel.sh@21 -- # val= 00:06:14.559 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.559 01:14:06 -- accel/accel.sh@21 -- # val=software 00:06:14.559 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.559 01:14:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.559 01:14:06 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:14.559 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.559 01:14:06 -- accel/accel.sh@21 -- # val=32 00:06:14.559 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.559 01:14:06 -- accel/accel.sh@21 -- # val=32 00:06:14.559 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.559 01:14:06 -- accel/accel.sh@21 -- # val=2 00:06:14.559 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.559 01:14:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:14.559 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.559 01:14:06 -- accel/accel.sh@21 -- # val=Yes 00:06:14.559 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.559 01:14:06 -- accel/accel.sh@21 -- # val= 00:06:14.559 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:14.559 01:14:06 -- accel/accel.sh@21 -- # val= 00:06:14.559 01:14:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # IFS=: 00:06:14.559 01:14:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.931 01:14:07 -- accel/accel.sh@21 -- # val= 00:06:15.931 01:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.931 01:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:15.931 01:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:15.931 01:14:07 -- accel/accel.sh@21 -- # val= 00:06:15.931 01:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.931 01:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:15.931 01:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:15.931 01:14:07 -- accel/accel.sh@21 -- # val= 00:06:15.931 01:14:07 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:15.931 01:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:15.931 01:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:15.931 01:14:07 -- accel/accel.sh@21 -- # val= 00:06:15.931 01:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.931 01:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:15.931 01:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:15.931 01:14:07 -- accel/accel.sh@21 -- # val= 00:06:15.931 01:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.931 01:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:15.931 01:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:15.931 01:14:07 -- accel/accel.sh@21 -- # val= 00:06:15.931 01:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.931 01:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:15.931 01:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:15.931 01:14:07 -- accel/accel.sh@21 -- # val= 00:06:15.931 01:14:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.931 01:14:07 -- accel/accel.sh@20 -- # IFS=: 00:06:15.931 01:14:07 -- accel/accel.sh@20 -- # read -r var val 00:06:15.931 01:14:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:15.931 01:14:07 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:15.931 01:14:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.931 00:06:15.931 real 0m3.048s 00:06:15.931 user 0m2.743s 00:06:15.931 sys 0m0.299s 00:06:15.931 01:14:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.931 01:14:07 -- common/autotest_common.sh@10 -- # set +x 00:06:15.931 ************************************ 00:06:15.931 END TEST accel_deomp_full_mthread 00:06:15.931 ************************************ 00:06:15.931 01:14:07 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:15.931 01:14:07 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:15.931 01:14:07 -- accel/accel.sh@129 -- # build_accel_config 00:06:15.931 01:14:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.931 01:14:07 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:15.931 01:14:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.931 01:14:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.931 01:14:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.931 01:14:07 -- common/autotest_common.sh@10 -- # set +x 00:06:15.931 01:14:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.931 01:14:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.931 01:14:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.931 01:14:07 -- accel/accel.sh@42 -- # jq -r . 00:06:15.931 ************************************ 00:06:15.931 START TEST accel_dif_functional_tests 00:06:15.931 ************************************ 00:06:15.931 01:14:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:15.931 [2024-07-27 01:14:07.379564] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:15.931 [2024-07-27 01:14:07.379660] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529242 ] 00:06:15.931 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.931 [2024-07-27 01:14:07.446918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.931 [2024-07-27 01:14:07.568984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.931 [2024-07-27 01:14:07.569039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.931 [2024-07-27 01:14:07.569043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.931 00:06:15.931 00:06:15.931 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.931 http://cunit.sourceforge.net/ 00:06:15.931 00:06:15.931 00:06:15.931 Suite: accel_dif 00:06:15.931 Test: verify: DIF generated, GUARD check ...passed 00:06:15.931 Test: verify: DIF generated, APPTAG check ...passed 00:06:15.931 Test: verify: DIF generated, REFTAG check ...passed 00:06:15.931 Test: verify: DIF not generated, GUARD check ...[2024-07-27 01:14:07.668364] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:15.931 [2024-07-27 01:14:07.668437] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:15.931 passed 00:06:15.931 Test: verify: DIF not generated, APPTAG check ...[2024-07-27 01:14:07.668481] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:15.931 [2024-07-27 01:14:07.668511] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:15.931 passed 00:06:15.931 Test: verify: DIF not generated, REFTAG check ...[2024-07-27 01:14:07.668546] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:15.931 [2024-07-27 01:14:07.668573] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:15.931 passed 00:06:15.931 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:15.931 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-27 01:14:07.668653] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:15.931 passed 00:06:15.931 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:15.932 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:15.932 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:15.932 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-27 01:14:07.668810] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:15.932 passed 00:06:15.932 Test: generate copy: DIF generated, GUARD check ...passed 00:06:15.932 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:15.932 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:15.932 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:15.932 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:15.932 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:15.932 Test: generate copy: iovecs-len validate ...[2024-07-27 01:14:07.669079] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:15.932 passed 00:06:15.932 Test: generate copy: buffer alignment validate ...passed 00:06:15.932 00:06:15.932 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.932 suites 1 1 n/a 0 0 00:06:15.932 tests 20 20 20 0 0 00:06:15.932 asserts 204 204 204 0 n/a 00:06:15.932 00:06:15.932 Elapsed time = 0.003 seconds 00:06:16.191 00:06:16.191 real 0m0.581s 00:06:16.191 user 0m0.861s 00:06:16.191 sys 0m0.184s 00:06:16.191 01:14:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.191 01:14:07 -- common/autotest_common.sh@10 -- # set +x 00:06:16.191 ************************************ 00:06:16.191 END TEST accel_dif_functional_tests 00:06:16.191 ************************************ 00:06:16.191 00:06:16.191 real 1m3.078s 00:06:16.191 user 1m10.887s 00:06:16.191 sys 0m7.235s 00:06:16.191 01:14:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.191 01:14:07 -- common/autotest_common.sh@10 -- # set +x 00:06:16.191 ************************************ 00:06:16.191 END TEST accel 00:06:16.191 ************************************ 00:06:16.448 01:14:07 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:16.448 01:14:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.448 01:14:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.448 01:14:07 -- common/autotest_common.sh@10 -- # set +x 00:06:16.448 ************************************ 00:06:16.448 START TEST accel_rpc 00:06:16.448 ************************************ 00:06:16.448 01:14:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:16.448 * Looking for test storage... 00:06:16.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:16.448 01:14:08 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:16.448 01:14:08 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=529346 00:06:16.448 01:14:08 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:16.448 01:14:08 -- accel/accel_rpc.sh@15 -- # waitforlisten 529346 00:06:16.448 01:14:08 -- common/autotest_common.sh@819 -- # '[' -z 529346 ']' 00:06:16.448 01:14:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.448 01:14:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:16.448 01:14:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.449 01:14:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:16.449 01:14:08 -- common/autotest_common.sh@10 -- # set +x 00:06:16.449 [2024-07-27 01:14:08.074895] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:16.449 [2024-07-27 01:14:08.074993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529346 ] 00:06:16.449 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.449 [2024-07-27 01:14:08.141150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.706 [2024-07-27 01:14:08.259962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.706 [2024-07-27 01:14:08.260155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.706 01:14:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:16.706 01:14:08 -- common/autotest_common.sh@852 -- # return 0 00:06:16.706 01:14:08 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:16.706 01:14:08 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:16.706 01:14:08 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:16.706 01:14:08 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:16.706 01:14:08 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:16.706 01:14:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.706 01:14:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.706 01:14:08 -- common/autotest_common.sh@10 -- # set +x 00:06:16.706 ************************************ 00:06:16.706 START TEST accel_assign_opcode 00:06:16.706 ************************************ 00:06:16.706 01:14:08 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:16.706 01:14:08 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:16.706 01:14:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:16.706 01:14:08 -- common/autotest_common.sh@10 -- # set +x 00:06:16.706 [2024-07-27 01:14:08.316710] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:16.706 01:14:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:16.706 01:14:08 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:16.706 01:14:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:16.706 01:14:08 -- common/autotest_common.sh@10 -- # set +x 00:06:16.706 [2024-07-27 01:14:08.324727] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:16.706 01:14:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:16.706 01:14:08 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:16.706 01:14:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:16.706 01:14:08 -- common/autotest_common.sh@10 -- # set +x 00:06:16.964 01:14:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:16.964 01:14:08 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:16.964 01:14:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:16.964 01:14:08 -- common/autotest_common.sh@10 -- # set +x 00:06:16.964 01:14:08 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:16.964 01:14:08 -- accel/accel_rpc.sh@42 -- # grep software 00:06:16.964 01:14:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:16.964 software 00:06:16.964 00:06:16.964 real 0m0.305s 00:06:16.964 user 0m0.042s 00:06:16.965 sys 0m0.008s 00:06:16.965 01:14:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.965 01:14:08 -- common/autotest_common.sh@10 -- # set +x 
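The accel_assign_opcode flow traced above is driven purely over JSON-RPC against a target started with --wait-for-rpc. A minimal sketch of the same sequence issued by hand, assuming the default /var/tmp/spdk.sock socket and a hypothetical $SPDK_DIR checkout (the inline comments restate what the NOTICE lines above show):

    "$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m incorrect    # accepted at RPC time even for a bogus module name
    "$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software     # re-assign the copy opcode to the software module
    "$SPDK_DIR/scripts/rpc.py" framework_start_init                     # finish subsystem initialization
    "$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy  # expected to print: software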
00:06:16.965 ************************************ 00:06:16.965 END TEST accel_assign_opcode 00:06:16.965 ************************************ 00:06:16.965 01:14:08 -- accel/accel_rpc.sh@55 -- # killprocess 529346 00:06:16.965 01:14:08 -- common/autotest_common.sh@926 -- # '[' -z 529346 ']' 00:06:16.965 01:14:08 -- common/autotest_common.sh@930 -- # kill -0 529346 00:06:16.965 01:14:08 -- common/autotest_common.sh@931 -- # uname 00:06:16.965 01:14:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:16.965 01:14:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 529346 00:06:16.965 01:14:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:16.965 01:14:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:16.965 01:14:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 529346' 00:06:16.965 killing process with pid 529346 00:06:16.965 01:14:08 -- common/autotest_common.sh@945 -- # kill 529346 00:06:16.965 01:14:08 -- common/autotest_common.sh@950 -- # wait 529346 00:06:17.531 00:06:17.531 real 0m1.158s 00:06:17.531 user 0m1.125s 00:06:17.531 sys 0m0.406s 00:06:17.531 01:14:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.531 01:14:09 -- common/autotest_common.sh@10 -- # set +x 00:06:17.531 ************************************ 00:06:17.531 END TEST accel_rpc 00:06:17.531 ************************************ 00:06:17.531 01:14:09 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:17.531 01:14:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.531 01:14:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.531 01:14:09 -- common/autotest_common.sh@10 -- # set +x 00:06:17.531 ************************************ 00:06:17.531 START TEST app_cmdline 00:06:17.531 ************************************ 00:06:17.531 01:14:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:17.531 * Looking for test storage... 00:06:17.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:17.531 01:14:09 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:17.531 01:14:09 -- app/cmdline.sh@17 -- # spdk_tgt_pid=529549 00:06:17.531 01:14:09 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:17.531 01:14:09 -- app/cmdline.sh@18 -- # waitforlisten 529549 00:06:17.531 01:14:09 -- common/autotest_common.sh@819 -- # '[' -z 529549 ']' 00:06:17.531 01:14:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.531 01:14:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.531 01:14:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.531 01:14:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.531 01:14:09 -- common/autotest_common.sh@10 -- # set +x 00:06:17.531 [2024-07-27 01:14:09.262601] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
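The cmdline test that follows starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods may be called. A minimal sketch of the checks performed below, issued by hand against the default RPC socket; $SPDK_DIR is hypothetical and the jq -r .version filter is illustrative, while the rest mirrors the trace:

    "$SPDK_DIR/scripts/rpc.py" spdk_get_version | jq -r .version     # "SPDK v24.01.1-pre git sha1 dbef7efac" in this run
    "$SPDK_DIR/scripts/rpc.py" rpc_get_methods | jq -r '.[]' | sort  # exactly the two allowed methods
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats                # not on the allow list: error -32601 "Method not found"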
00:06:17.531 [2024-07-27 01:14:09.262691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529549 ] 00:06:17.789 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.789 [2024-07-27 01:14:09.324432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.789 [2024-07-27 01:14:09.442457] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.789 [2024-07-27 01:14:09.442660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.724 01:14:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.724 01:14:10 -- common/autotest_common.sh@852 -- # return 0 00:06:18.724 01:14:10 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:18.724 { 00:06:18.724 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:06:18.724 "fields": { 00:06:18.724 "major": 24, 00:06:18.725 "minor": 1, 00:06:18.725 "patch": 1, 00:06:18.725 "suffix": "-pre", 00:06:18.725 "commit": "dbef7efac" 00:06:18.725 } 00:06:18.725 } 00:06:18.725 01:14:10 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:18.725 01:14:10 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:18.725 01:14:10 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:18.725 01:14:10 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:18.725 01:14:10 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:18.725 01:14:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:18.725 01:14:10 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:18.725 01:14:10 -- common/autotest_common.sh@10 -- # set +x 00:06:18.725 01:14:10 -- app/cmdline.sh@26 -- # sort 00:06:19.017 01:14:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:19.018 01:14:10 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:19.018 01:14:10 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:19.018 01:14:10 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.018 01:14:10 -- common/autotest_common.sh@640 -- # local es=0 00:06:19.018 01:14:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.018 01:14:10 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.018 01:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.018 01:14:10 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.018 01:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.018 01:14:10 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.018 01:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.018 01:14:10 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.018 01:14:10 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:19.018 01:14:10 -- common/autotest_common.sh@643 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.284 request: 00:06:19.284 { 00:06:19.284 "method": "env_dpdk_get_mem_stats", 00:06:19.284 "req_id": 1 00:06:19.284 } 00:06:19.284 Got JSON-RPC error response 00:06:19.284 response: 00:06:19.284 { 00:06:19.284 "code": -32601, 00:06:19.284 "message": "Method not found" 00:06:19.284 } 00:06:19.284 01:14:10 -- common/autotest_common.sh@643 -- # es=1 00:06:19.284 01:14:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:19.284 01:14:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:19.284 01:14:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:19.284 01:14:10 -- app/cmdline.sh@1 -- # killprocess 529549 00:06:19.284 01:14:10 -- common/autotest_common.sh@926 -- # '[' -z 529549 ']' 00:06:19.284 01:14:10 -- common/autotest_common.sh@930 -- # kill -0 529549 00:06:19.284 01:14:10 -- common/autotest_common.sh@931 -- # uname 00:06:19.284 01:14:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:19.284 01:14:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 529549 00:06:19.284 01:14:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:19.284 01:14:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:19.284 01:14:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 529549' 00:06:19.284 killing process with pid 529549 00:06:19.284 01:14:10 -- common/autotest_common.sh@945 -- # kill 529549 00:06:19.284 01:14:10 -- common/autotest_common.sh@950 -- # wait 529549 00:06:19.851 00:06:19.851 real 0m2.142s 00:06:19.851 user 0m2.711s 00:06:19.851 sys 0m0.507s 00:06:19.851 01:14:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.851 01:14:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.851 ************************************ 00:06:19.851 END TEST app_cmdline 00:06:19.851 ************************************ 00:06:19.851 01:14:11 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:19.851 01:14:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:19.851 01:14:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.851 01:14:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.851 ************************************ 00:06:19.851 START TEST version 00:06:19.851 ************************************ 00:06:19.851 01:14:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:19.851 * Looking for test storage... 
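The version test that follows pulls each field straight out of include/spdk/version.h with a grep/cut/tr pipeline (get_header_version). A minimal sketch of the same extraction for the major number, assuming a hypothetical $SPDK_DIR checkout:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$SPDK_DIR/include/spdk/version.h" | cut -f2 | tr -d '"'
    # -> 24 in this run; MINOR, PATCH and SUFFIX are read the same way, assembling to 24.1.1 with a -pre suffix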
00:06:19.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:19.851 01:14:11 -- app/version.sh@17 -- # get_header_version major 00:06:19.851 01:14:11 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.851 01:14:11 -- app/version.sh@14 -- # cut -f2 00:06:19.851 01:14:11 -- app/version.sh@14 -- # tr -d '"' 00:06:19.851 01:14:11 -- app/version.sh@17 -- # major=24 00:06:19.851 01:14:11 -- app/version.sh@18 -- # get_header_version minor 00:06:19.851 01:14:11 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.851 01:14:11 -- app/version.sh@14 -- # cut -f2 00:06:19.851 01:14:11 -- app/version.sh@14 -- # tr -d '"' 00:06:19.851 01:14:11 -- app/version.sh@18 -- # minor=1 00:06:19.851 01:14:11 -- app/version.sh@19 -- # get_header_version patch 00:06:19.851 01:14:11 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.851 01:14:11 -- app/version.sh@14 -- # cut -f2 00:06:19.851 01:14:11 -- app/version.sh@14 -- # tr -d '"' 00:06:19.851 01:14:11 -- app/version.sh@19 -- # patch=1 00:06:19.851 01:14:11 -- app/version.sh@20 -- # get_header_version suffix 00:06:19.851 01:14:11 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.851 01:14:11 -- app/version.sh@14 -- # cut -f2 00:06:19.851 01:14:11 -- app/version.sh@14 -- # tr -d '"' 00:06:19.851 01:14:11 -- app/version.sh@20 -- # suffix=-pre 00:06:19.851 01:14:11 -- app/version.sh@22 -- # version=24.1 00:06:19.851 01:14:11 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:19.851 01:14:11 -- app/version.sh@25 -- # version=24.1.1 00:06:19.851 01:14:11 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:19.851 01:14:11 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:19.851 01:14:11 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:19.851 01:14:11 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:19.851 01:14:11 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:19.851 00:06:19.851 real 0m0.096s 00:06:19.851 user 0m0.052s 00:06:19.851 sys 0m0.065s 00:06:19.851 01:14:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.851 01:14:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.851 ************************************ 00:06:19.851 END TEST version 00:06:19.851 ************************************ 00:06:19.851 01:14:11 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:19.851 01:14:11 -- spdk/autotest.sh@204 -- # uname -s 00:06:19.851 01:14:11 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:19.851 01:14:11 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:19.851 01:14:11 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:19.851 01:14:11 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:19.851 01:14:11 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:19.851 01:14:11 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:19.851 01:14:11 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:06:19.851 01:14:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.851 01:14:11 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:19.851 01:14:11 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:06:19.851 01:14:11 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:19.851 01:14:11 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:19.851 01:14:11 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:19.851 01:14:11 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:19.851 01:14:11 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:19.851 01:14:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:19.851 01:14:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.851 01:14:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.851 ************************************ 00:06:19.851 START TEST nvmf_tcp 00:06:19.851 ************************************ 00:06:19.851 01:14:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:19.851 * Looking for test storage... 00:06:19.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:19.851 01:14:11 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:19.851 01:14:11 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:19.851 01:14:11 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.851 01:14:11 -- nvmf/common.sh@7 -- # uname -s 00:06:19.851 01:14:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.851 01:14:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.851 01:14:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.851 01:14:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.851 01:14:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.851 01:14:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.851 01:14:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.851 01:14:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.851 01:14:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.851 01:14:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.851 01:14:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.851 01:14:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.851 01:14:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.852 01:14:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.852 01:14:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.852 01:14:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.852 01:14:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.852 01:14:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.852 01:14:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.852 01:14:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.852 01:14:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.852 01:14:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.852 01:14:11 -- paths/export.sh@5 -- # export PATH 00:06:19.852 01:14:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.852 01:14:11 -- nvmf/common.sh@46 -- # : 0 00:06:19.852 01:14:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:19.852 01:14:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:19.852 01:14:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:19.852 01:14:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.852 01:14:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.852 01:14:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:19.852 01:14:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:19.852 01:14:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:19.852 01:14:11 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:19.852 01:14:11 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:19.852 01:14:11 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:19.852 01:14:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:19.852 01:14:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.852 01:14:11 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:19.852 01:14:11 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:19.852 01:14:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:19.852 01:14:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.852 01:14:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.852 ************************************ 00:06:19.852 START TEST nvmf_example 00:06:19.852 ************************************ 00:06:19.852 01:14:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:19.852 * Looking for test storage... 
00:06:19.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:19.852 01:14:11 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.852 01:14:11 -- nvmf/common.sh@7 -- # uname -s 00:06:19.852 01:14:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.852 01:14:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.852 01:14:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.852 01:14:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.852 01:14:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.852 01:14:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.852 01:14:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.852 01:14:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.852 01:14:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.852 01:14:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.852 01:14:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.852 01:14:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.852 01:14:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.852 01:14:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.852 01:14:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.852 01:14:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.852 01:14:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.852 01:14:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.852 01:14:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.852 01:14:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.852 01:14:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.852 01:14:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.852 01:14:11 -- paths/export.sh@5 -- # export PATH 00:06:19.852 01:14:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.852 01:14:11 -- nvmf/common.sh@46 -- # : 0 00:06:19.852 01:14:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:19.852 01:14:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:19.852 01:14:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:19.852 01:14:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.852 01:14:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.852 01:14:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:19.852 01:14:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:19.852 01:14:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:19.852 01:14:11 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:19.852 01:14:11 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:19.852 01:14:11 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:19.852 01:14:11 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:19.852 01:14:11 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:19.852 01:14:11 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:19.852 01:14:11 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:19.852 01:14:11 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:19.852 01:14:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:19.852 01:14:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.852 01:14:11 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:19.852 01:14:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:19.852 01:14:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.852 01:14:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:19.852 01:14:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:19.852 01:14:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:19.852 01:14:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.852 01:14:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:19.852 01:14:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.111 01:14:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:20.111 01:14:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:20.111 01:14:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:20.111 01:14:11 -- 
common/autotest_common.sh@10 -- # set +x 00:06:22.013 01:14:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:22.013 01:14:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:22.013 01:14:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:22.013 01:14:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:22.013 01:14:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:22.013 01:14:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:22.013 01:14:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:22.013 01:14:13 -- nvmf/common.sh@294 -- # net_devs=() 00:06:22.013 01:14:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:22.013 01:14:13 -- nvmf/common.sh@295 -- # e810=() 00:06:22.013 01:14:13 -- nvmf/common.sh@295 -- # local -ga e810 00:06:22.013 01:14:13 -- nvmf/common.sh@296 -- # x722=() 00:06:22.013 01:14:13 -- nvmf/common.sh@296 -- # local -ga x722 00:06:22.013 01:14:13 -- nvmf/common.sh@297 -- # mlx=() 00:06:22.013 01:14:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:22.013 01:14:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:22.013 01:14:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:22.013 01:14:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:22.013 01:14:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:22.013 01:14:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:22.013 01:14:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:22.013 01:14:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:22.013 01:14:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:22.013 01:14:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:22.013 01:14:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:22.013 01:14:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:22.013 01:14:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:22.013 01:14:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:22.013 01:14:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:22.013 01:14:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:22.013 01:14:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:22.013 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:22.013 01:14:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:22.013 01:14:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:22.013 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:22.013 01:14:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
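Both E810 physical functions were matched above by device ID 0x159b. For each one, the loop that continues below resolves the kernel netdev name by listing the function's net/ directory in sysfs; a minimal sketch using the two PCI addresses from this run:

    ls /sys/bus/pci/devices/0000:0a:00.0/net/    # -> cvl_0_0, picked as NVMF_TARGET_INTERFACE below
    ls /sys/bus/pci/devices/0000:0a:00.1/net/    # -> cvl_0_1, picked as NVMF_INITIATOR_INTERFACE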
00:06:22.013 01:14:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:22.013 01:14:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:22.013 01:14:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.013 01:14:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:22.013 01:14:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.013 01:14:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:22.013 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:22.013 01:14:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.013 01:14:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:22.013 01:14:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.013 01:14:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:22.013 01:14:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.013 01:14:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:22.013 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:22.013 01:14:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.013 01:14:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:22.013 01:14:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:22.013 01:14:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:22.013 01:14:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:22.013 01:14:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:22.013 01:14:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:22.013 01:14:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:22.013 01:14:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:22.013 01:14:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:22.013 01:14:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:22.013 01:14:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:22.013 01:14:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:22.013 01:14:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:22.013 01:14:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:22.013 01:14:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:22.013 01:14:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:22.013 01:14:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:22.013 01:14:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:22.013 01:14:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:22.013 01:14:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:22.013 01:14:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:22.013 01:14:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:22.013 01:14:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:22.013 01:14:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:22.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
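nvmf_tcp_init, traced above, isolates the target-side port (cvl_0_0) in a network namespace so the initiator port (cvl_0_1, left in the default namespace) reaches it over the physical link. A condensed sketch of that bring-up, using the same interface names and addresses as the log (run as root; a simplification, not a substitute for nvmf/common.sh):

# Hedged sketch of the namespace split performed by nvmf_tcp_init above.
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                     # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"    # target side
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                              # initiator -> target reachability check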
00:06:22.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:06:22.013 00:06:22.013 --- 10.0.0.2 ping statistics --- 00:06:22.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.013 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:06:22.013 01:14:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:22.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:22.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:06:22.014 00:06:22.014 --- 10.0.0.1 ping statistics --- 00:06:22.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.014 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:06:22.014 01:14:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:22.014 01:14:13 -- nvmf/common.sh@410 -- # return 0 00:06:22.014 01:14:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:22.014 01:14:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:22.014 01:14:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:22.014 01:14:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:22.014 01:14:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:22.014 01:14:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:22.014 01:14:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:22.014 01:14:13 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:22.014 01:14:13 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:22.014 01:14:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:22.014 01:14:13 -- common/autotest_common.sh@10 -- # set +x 00:06:22.014 01:14:13 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:22.014 01:14:13 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:22.014 01:14:13 -- target/nvmf_example.sh@34 -- # nvmfpid=531582 00:06:22.014 01:14:13 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:22.014 01:14:13 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:22.014 01:14:13 -- target/nvmf_example.sh@36 -- # waitforlisten 531582 00:06:22.014 01:14:13 -- common/autotest_common.sh@819 -- # '[' -z 531582 ']' 00:06:22.014 01:14:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.014 01:14:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:22.014 01:14:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
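Once both ping directions pass and nvme-tcp is loaded, the example target is launched inside the namespace and waitforlisten blocks until its RPC socket answers. A simplified sketch of that wait loop, assuming SPDK's rpc.py is on PATH and the default /var/tmp/spdk.sock socket; the real helper in autotest_common.sh also bounds the retries and distinguishes a dead process from a slow start:

# Hedged sketch of a waitforlisten-style loop for the nvmf example app started above.
pid=$1; rpc_sock=${2:-/var/tmp/spdk.sock}
for i in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo "app $pid exited early" >&2; exit 1; }
    if rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
        echo "app $pid is listening on $rpc_sock"
        exit 0
    fi
    sleep 0.5
done
echo "timed out waiting for $rpc_sock" >&2; exit 1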
00:06:22.014 01:14:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:22.014 01:14:13 -- common/autotest_common.sh@10 -- # set +x 00:06:22.014 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.952 01:14:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.952 01:14:14 -- common/autotest_common.sh@852 -- # return 0 00:06:22.952 01:14:14 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:22.952 01:14:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:22.952 01:14:14 -- common/autotest_common.sh@10 -- # set +x 00:06:22.952 01:14:14 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:22.952 01:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:22.952 01:14:14 -- common/autotest_common.sh@10 -- # set +x 00:06:22.952 01:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:22.952 01:14:14 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:22.952 01:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:22.952 01:14:14 -- common/autotest_common.sh@10 -- # set +x 00:06:23.211 01:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:23.211 01:14:14 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:23.211 01:14:14 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:23.211 01:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:23.211 01:14:14 -- common/autotest_common.sh@10 -- # set +x 00:06:23.211 01:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:23.211 01:14:14 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:23.211 01:14:14 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:23.211 01:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:23.211 01:14:14 -- common/autotest_common.sh@10 -- # set +x 00:06:23.211 01:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:23.211 01:14:14 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:23.211 01:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:23.211 01:14:14 -- common/autotest_common.sh@10 -- # set +x 00:06:23.211 01:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:23.211 01:14:14 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:23.211 01:14:14 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:23.211 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.195 Initializing NVMe Controllers 00:06:33.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:33.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:33.195 Initialization complete. Launching workers. 
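The rpc_cmd sequence traced here is the whole NVMe-oF/TCP target bring-up: create the TCP transport, back it with a 64 MiB malloc bdev, expose that bdev as namespace 1 of cnode1, listen on 10.0.0.2:4420, then drive load from the initiator side with spdk_nvme_perf. The same steps issued through rpc.py directly, as a sketch using only the commands and arguments visible in the trace (default RPC socket assumed):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                    # -> Malloc0 (64 MiB, 512 B blocks)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 4 KiB random mixed I/O (30% reads) at queue depth 64 for 10 s, as in the run above:
spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'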
00:06:33.195 ======================================================== 00:06:33.195 Latency(us) 00:06:33.195 Device Information : IOPS MiB/s Average min max 00:06:33.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13988.56 54.64 4575.39 863.69 15322.84 00:06:33.195 ======================================================== 00:06:33.195 Total : 13988.56 54.64 4575.39 863.69 15322.84 00:06:33.195 00:06:33.195 01:14:24 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:33.195 01:14:24 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:33.195 01:14:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:33.195 01:14:24 -- nvmf/common.sh@116 -- # sync 00:06:33.195 01:14:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:33.195 01:14:24 -- nvmf/common.sh@119 -- # set +e 00:06:33.195 01:14:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:33.195 01:14:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:33.195 rmmod nvme_tcp 00:06:33.195 rmmod nvme_fabrics 00:06:33.195 rmmod nvme_keyring 00:06:33.454 01:14:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:33.454 01:14:24 -- nvmf/common.sh@123 -- # set -e 00:06:33.454 01:14:24 -- nvmf/common.sh@124 -- # return 0 00:06:33.454 01:14:24 -- nvmf/common.sh@477 -- # '[' -n 531582 ']' 00:06:33.454 01:14:24 -- nvmf/common.sh@478 -- # killprocess 531582 00:06:33.454 01:14:24 -- common/autotest_common.sh@926 -- # '[' -z 531582 ']' 00:06:33.454 01:14:24 -- common/autotest_common.sh@930 -- # kill -0 531582 00:06:33.454 01:14:24 -- common/autotest_common.sh@931 -- # uname 00:06:33.454 01:14:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:33.454 01:14:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 531582 00:06:33.454 01:14:24 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:06:33.454 01:14:24 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:06:33.454 01:14:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 531582' 00:06:33.454 killing process with pid 531582 00:06:33.454 01:14:24 -- common/autotest_common.sh@945 -- # kill 531582 00:06:33.454 01:14:24 -- common/autotest_common.sh@950 -- # wait 531582 00:06:33.714 nvmf threads initialize successfully 00:06:33.714 bdev subsystem init successfully 00:06:33.714 created a nvmf target service 00:06:33.714 create targets's poll groups done 00:06:33.714 all subsystems of target started 00:06:33.714 nvmf target is running 00:06:33.714 all subsystems of target stopped 00:06:33.714 destroy targets's poll groups done 00:06:33.714 destroyed the nvmf target service 00:06:33.714 bdev subsystem finish successfully 00:06:33.714 nvmf threads destroy successfully 00:06:33.714 01:14:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:33.714 01:14:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:33.714 01:14:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:33.714 01:14:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:33.714 01:14:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:33.714 01:14:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.714 01:14:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:33.714 01:14:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.618 01:14:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:06:35.618 01:14:27 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:35.618 01:14:27 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:06:35.618 01:14:27 -- common/autotest_common.sh@10 -- # set +x 00:06:35.618 00:06:35.619 real 0m15.768s 00:06:35.619 user 0m39.939s 00:06:35.619 sys 0m4.745s 00:06:35.619 01:14:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.619 01:14:27 -- common/autotest_common.sh@10 -- # set +x 00:06:35.619 ************************************ 00:06:35.619 END TEST nvmf_example 00:06:35.619 ************************************ 00:06:35.619 01:14:27 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:35.619 01:14:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:35.619 01:14:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.619 01:14:27 -- common/autotest_common.sh@10 -- # set +x 00:06:35.619 ************************************ 00:06:35.619 START TEST nvmf_filesystem 00:06:35.619 ************************************ 00:06:35.619 01:14:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:35.879 * Looking for test storage... 00:06:35.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.879 01:14:27 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:35.879 01:14:27 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:35.879 01:14:27 -- common/autotest_common.sh@34 -- # set -e 00:06:35.879 01:14:27 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:35.879 01:14:27 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:35.879 01:14:27 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:35.879 01:14:27 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:35.879 01:14:27 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:35.879 01:14:27 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:35.879 01:14:27 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:35.879 01:14:27 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:35.879 01:14:27 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:35.879 01:14:27 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:35.879 01:14:27 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:35.879 01:14:27 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:35.879 01:14:27 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:35.879 01:14:27 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:35.879 01:14:27 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:35.879 01:14:27 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:35.879 01:14:27 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:35.879 01:14:27 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:35.879 01:14:27 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:35.879 01:14:27 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:35.879 01:14:27 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:35.879 01:14:27 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:35.879 01:14:27 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:35.879 01:14:27 -- common/build_config.sh@20 -- # CONFIG_LTO=n 
00:06:35.879 01:14:27 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:35.879 01:14:27 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:35.879 01:14:27 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:35.879 01:14:27 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:35.879 01:14:27 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:35.879 01:14:27 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:35.879 01:14:27 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:35.879 01:14:27 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:35.879 01:14:27 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:35.879 01:14:27 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:35.879 01:14:27 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:35.879 01:14:27 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:35.879 01:14:27 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:35.879 01:14:27 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:35.879 01:14:27 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:35.879 01:14:27 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:35.879 01:14:27 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:35.879 01:14:27 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:35.879 01:14:27 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:35.879 01:14:27 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:35.879 01:14:27 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:35.879 01:14:27 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:35.879 01:14:27 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:35.879 01:14:27 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:35.879 01:14:27 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:35.879 01:14:27 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:35.879 01:14:27 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:35.879 01:14:27 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:35.879 01:14:27 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:35.879 01:14:27 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:35.879 01:14:27 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:06:35.879 01:14:27 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:35.880 01:14:27 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:06:35.880 01:14:27 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:35.880 01:14:27 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:35.880 01:14:27 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:35.880 01:14:27 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:35.880 01:14:27 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:06:35.880 01:14:27 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:35.880 01:14:27 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:06:35.880 01:14:27 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:35.880 01:14:27 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:35.880 01:14:27 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:35.880 01:14:27 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:35.880 01:14:27 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:35.880 01:14:27 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:35.880 01:14:27 -- 
common/build_config.sh@67 -- # CONFIG_FC=n 00:06:35.880 01:14:27 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:06:35.880 01:14:27 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:35.880 01:14:27 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:35.880 01:14:27 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:35.880 01:14:27 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:35.880 01:14:27 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:35.880 01:14:27 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:35.880 01:14:27 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:35.880 01:14:27 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:35.880 01:14:27 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:35.880 01:14:27 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:35.880 01:14:27 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:06:35.880 01:14:27 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:35.880 01:14:27 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:35.880 01:14:27 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:35.880 01:14:27 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:35.880 01:14:27 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:35.880 01:14:27 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:35.880 01:14:27 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:35.880 01:14:27 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:35.880 01:14:27 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:35.880 01:14:27 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:35.880 01:14:27 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:35.880 01:14:27 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:35.880 01:14:27 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:35.880 01:14:27 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:35.880 01:14:27 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:35.880 01:14:27 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:35.880 #define SPDK_CONFIG_H 00:06:35.880 #define SPDK_CONFIG_APPS 1 00:06:35.880 #define SPDK_CONFIG_ARCH native 00:06:35.880 #undef SPDK_CONFIG_ASAN 00:06:35.880 #undef SPDK_CONFIG_AVAHI 00:06:35.880 #undef SPDK_CONFIG_CET 00:06:35.880 #define SPDK_CONFIG_COVERAGE 1 00:06:35.880 #define SPDK_CONFIG_CROSS_PREFIX 00:06:35.880 #undef SPDK_CONFIG_CRYPTO 00:06:35.880 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:35.880 #undef SPDK_CONFIG_CUSTOMOCF 00:06:35.880 #undef SPDK_CONFIG_DAOS 00:06:35.880 #define SPDK_CONFIG_DAOS_DIR 00:06:35.880 #define SPDK_CONFIG_DEBUG 1 00:06:35.880 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:35.880 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:35.880 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:35.880 #define SPDK_CONFIG_DPDK_LIB_DIR 
00:06:35.880 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:35.880 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:35.880 #define SPDK_CONFIG_EXAMPLES 1 00:06:35.880 #undef SPDK_CONFIG_FC 00:06:35.880 #define SPDK_CONFIG_FC_PATH 00:06:35.880 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:35.880 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:35.880 #undef SPDK_CONFIG_FUSE 00:06:35.880 #undef SPDK_CONFIG_FUZZER 00:06:35.880 #define SPDK_CONFIG_FUZZER_LIB 00:06:35.880 #undef SPDK_CONFIG_GOLANG 00:06:35.880 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:35.880 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:35.880 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:35.880 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:35.880 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:35.880 #define SPDK_CONFIG_IDXD 1 00:06:35.880 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:35.880 #undef SPDK_CONFIG_IPSEC_MB 00:06:35.880 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:35.880 #define SPDK_CONFIG_ISAL 1 00:06:35.880 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:35.880 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:35.880 #define SPDK_CONFIG_LIBDIR 00:06:35.880 #undef SPDK_CONFIG_LTO 00:06:35.880 #define SPDK_CONFIG_MAX_LCORES 00:06:35.880 #define SPDK_CONFIG_NVME_CUSE 1 00:06:35.880 #undef SPDK_CONFIG_OCF 00:06:35.880 #define SPDK_CONFIG_OCF_PATH 00:06:35.880 #define SPDK_CONFIG_OPENSSL_PATH 00:06:35.880 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:35.880 #undef SPDK_CONFIG_PGO_USE 00:06:35.880 #define SPDK_CONFIG_PREFIX /usr/local 00:06:35.880 #undef SPDK_CONFIG_RAID5F 00:06:35.880 #undef SPDK_CONFIG_RBD 00:06:35.880 #define SPDK_CONFIG_RDMA 1 00:06:35.880 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:35.880 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:35.880 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:35.880 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:35.880 #define SPDK_CONFIG_SHARED 1 00:06:35.880 #undef SPDK_CONFIG_SMA 00:06:35.880 #define SPDK_CONFIG_TESTS 1 00:06:35.880 #undef SPDK_CONFIG_TSAN 00:06:35.880 #define SPDK_CONFIG_UBLK 1 00:06:35.880 #define SPDK_CONFIG_UBSAN 1 00:06:35.880 #undef SPDK_CONFIG_UNIT_TESTS 00:06:35.880 #undef SPDK_CONFIG_URING 00:06:35.880 #define SPDK_CONFIG_URING_PATH 00:06:35.880 #undef SPDK_CONFIG_URING_ZNS 00:06:35.880 #undef SPDK_CONFIG_USDT 00:06:35.880 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:35.880 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:35.880 #undef SPDK_CONFIG_VFIO_USER 00:06:35.880 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:35.880 #define SPDK_CONFIG_VHOST 1 00:06:35.880 #define SPDK_CONFIG_VIRTIO 1 00:06:35.880 #undef SPDK_CONFIG_VTUNE 00:06:35.880 #define SPDK_CONFIG_VTUNE_DIR 00:06:35.880 #define SPDK_CONFIG_WERROR 1 00:06:35.880 #define SPDK_CONFIG_WPDK_DIR 00:06:35.880 #undef SPDK_CONFIG_XNVME 00:06:35.880 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:35.880 01:14:27 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:35.880 01:14:27 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.880 01:14:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.880 01:14:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.880 01:14:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.880 01:14:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.880 01:14:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.880 01:14:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.880 01:14:27 -- paths/export.sh@5 -- # export PATH 00:06:35.880 01:14:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.880 01:14:27 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:35.880 01:14:27 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:35.880 01:14:27 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:35.880 01:14:27 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:35.880 01:14:27 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:35.880 01:14:27 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:35.880 01:14:27 -- pm/common@16 -- # TEST_TAG=N/A 00:06:35.880 01:14:27 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:35.880 01:14:27 -- common/autotest_common.sh@52 -- # : 1 00:06:35.880 01:14:27 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:06:35.880 01:14:27 -- common/autotest_common.sh@56 -- # : 0 00:06:35.880 01:14:27 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:35.880 01:14:27 -- 
common/autotest_common.sh@58 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:06:35.881 01:14:27 -- common/autotest_common.sh@60 -- # : 1 00:06:35.881 01:14:27 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:35.881 01:14:27 -- common/autotest_common.sh@62 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:06:35.881 01:14:27 -- common/autotest_common.sh@64 -- # : 00:06:35.881 01:14:27 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:06:35.881 01:14:27 -- common/autotest_common.sh@66 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:06:35.881 01:14:27 -- common/autotest_common.sh@68 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:06:35.881 01:14:27 -- common/autotest_common.sh@70 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:06:35.881 01:14:27 -- common/autotest_common.sh@72 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:35.881 01:14:27 -- common/autotest_common.sh@74 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:06:35.881 01:14:27 -- common/autotest_common.sh@76 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:06:35.881 01:14:27 -- common/autotest_common.sh@78 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:06:35.881 01:14:27 -- common/autotest_common.sh@80 -- # : 1 00:06:35.881 01:14:27 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:06:35.881 01:14:27 -- common/autotest_common.sh@82 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:06:35.881 01:14:27 -- common/autotest_common.sh@84 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:06:35.881 01:14:27 -- common/autotest_common.sh@86 -- # : 1 00:06:35.881 01:14:27 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:06:35.881 01:14:27 -- common/autotest_common.sh@88 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:06:35.881 01:14:27 -- common/autotest_common.sh@90 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:35.881 01:14:27 -- common/autotest_common.sh@92 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:06:35.881 01:14:27 -- common/autotest_common.sh@94 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:06:35.881 01:14:27 -- common/autotest_common.sh@96 -- # : tcp 00:06:35.881 01:14:27 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:35.881 01:14:27 -- common/autotest_common.sh@98 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:06:35.881 01:14:27 -- common/autotest_common.sh@100 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:06:35.881 01:14:27 -- common/autotest_common.sh@102 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:06:35.881 01:14:27 -- common/autotest_common.sh@104 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:06:35.881 
01:14:27 -- common/autotest_common.sh@106 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:06:35.881 01:14:27 -- common/autotest_common.sh@108 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:06:35.881 01:14:27 -- common/autotest_common.sh@110 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:06:35.881 01:14:27 -- common/autotest_common.sh@112 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:35.881 01:14:27 -- common/autotest_common.sh@114 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:06:35.881 01:14:27 -- common/autotest_common.sh@116 -- # : 1 00:06:35.881 01:14:27 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:06:35.881 01:14:27 -- common/autotest_common.sh@118 -- # : 00:06:35.881 01:14:27 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:35.881 01:14:27 -- common/autotest_common.sh@120 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:06:35.881 01:14:27 -- common/autotest_common.sh@122 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:06:35.881 01:14:27 -- common/autotest_common.sh@124 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:06:35.881 01:14:27 -- common/autotest_common.sh@126 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:06:35.881 01:14:27 -- common/autotest_common.sh@128 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:06:35.881 01:14:27 -- common/autotest_common.sh@130 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:06:35.881 01:14:27 -- common/autotest_common.sh@132 -- # : 00:06:35.881 01:14:27 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:06:35.881 01:14:27 -- common/autotest_common.sh@134 -- # : true 00:06:35.881 01:14:27 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:06:35.881 01:14:27 -- common/autotest_common.sh@136 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:06:35.881 01:14:27 -- common/autotest_common.sh@138 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:06:35.881 01:14:27 -- common/autotest_common.sh@140 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:06:35.881 01:14:27 -- common/autotest_common.sh@142 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:06:35.881 01:14:27 -- common/autotest_common.sh@144 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:06:35.881 01:14:27 -- common/autotest_common.sh@146 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:06:35.881 01:14:27 -- common/autotest_common.sh@148 -- # : e810 00:06:35.881 01:14:27 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:06:35.881 01:14:27 -- common/autotest_common.sh@150 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:06:35.881 01:14:27 -- common/autotest_common.sh@152 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
00:06:35.881 01:14:27 -- common/autotest_common.sh@154 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:06:35.881 01:14:27 -- common/autotest_common.sh@156 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:06:35.881 01:14:27 -- common/autotest_common.sh@158 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:06:35.881 01:14:27 -- common/autotest_common.sh@160 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:06:35.881 01:14:27 -- common/autotest_common.sh@163 -- # : 00:06:35.881 01:14:27 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:06:35.881 01:14:27 -- common/autotest_common.sh@165 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:06:35.881 01:14:27 -- common/autotest_common.sh@167 -- # : 0 00:06:35.881 01:14:27 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:35.881 01:14:27 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:35.881 01:14:27 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:35.881 01:14:27 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:35.881 01:14:27 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:35.881 01:14:27 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:35.881 01:14:27 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:35.881 01:14:27 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:35.882 01:14:27 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:35.882 01:14:27 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:35.882 01:14:27 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:35.882 01:14:27 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:35.882 01:14:27 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:35.882 01:14:27 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:35.882 01:14:27 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:06:35.882 01:14:27 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:35.882 01:14:27 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:35.882 01:14:27 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:35.882 01:14:27 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:35.882 01:14:27 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:35.882 01:14:27 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:06:35.882 01:14:27 -- common/autotest_common.sh@196 -- # cat 00:06:35.882 01:14:27 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:06:35.882 01:14:27 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:35.882 01:14:27 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:35.882 01:14:27 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:35.882 01:14:27 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:35.882 01:14:27 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:06:35.882 01:14:27 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:06:35.882 01:14:27 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:35.882 01:14:27 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:35.882 01:14:27 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:35.882 01:14:27 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:35.882 01:14:27 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:35.882 01:14:27 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:35.882 01:14:27 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:35.882 01:14:27 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:35.882 01:14:27 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:35.882 01:14:27 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:35.882 01:14:27 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:35.882 01:14:27 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:35.882 01:14:27 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:06:35.882 01:14:27 -- common/autotest_common.sh@249 -- # export valgrind= 00:06:35.882 01:14:27 -- common/autotest_common.sh@249 -- # valgrind= 00:06:35.882 01:14:27 -- common/autotest_common.sh@255 -- # uname -s 00:06:35.882 01:14:27 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:06:35.882 01:14:27 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:06:35.882 01:14:27 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:06:35.882 01:14:27 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:06:35.882 01:14:27 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:35.882 01:14:27 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:35.882 01:14:27 -- common/autotest_common.sh@265 -- # MAKE=make 00:06:35.882 01:14:27 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j48 00:06:35.882 01:14:27 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:06:35.882 01:14:27 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:06:35.882 01:14:27 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:35.882 01:14:27 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:06:35.882 01:14:27 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:06:35.882 01:14:27 -- common/autotest_common.sh@291 -- # for i in "$@" 00:06:35.882 01:14:27 -- common/autotest_common.sh@292 -- # case "$i" in 00:06:35.882 01:14:27 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:06:35.882 01:14:27 -- common/autotest_common.sh@309 -- # [[ -z 533334 ]] 00:06:35.882 01:14:27 -- common/autotest_common.sh@309 -- # 
kill -0 533334 00:06:35.882 01:14:27 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:06:35.882 01:14:27 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:06:35.882 01:14:27 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:06:35.882 01:14:27 -- common/autotest_common.sh@322 -- # local mount target_dir 00:06:35.882 01:14:27 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:06:35.882 01:14:27 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:06:35.882 01:14:27 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:06:35.882 01:14:27 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:06:35.882 01:14:27 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.ZqqIDE 00:06:35.882 01:14:27 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:35.882 01:14:27 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:06:35.882 01:14:27 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:06:35.882 01:14:27 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ZqqIDE/tests/target /tmp/spdk.ZqqIDE 00:06:35.882 01:14:27 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:06:35.882 01:14:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:35.882 01:14:27 -- common/autotest_common.sh@318 -- # df -T 00:06:35.882 01:14:27 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:06:35.882 01:14:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:06:35.882 01:14:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:06:35.882 01:14:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:06:35.882 01:14:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:06:35.882 01:14:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:06:35.882 01:14:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:35.882 01:14:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:06:35.882 01:14:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:06:35.882 01:14:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=919711744 00:06:35.882 01:14:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:06:35.882 01:14:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=4364718080 00:06:35.882 01:14:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:35.882 01:14:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:06:35.882 01:14:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:06:35.882 01:14:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=55595741184 00:06:35.882 01:14:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61994713088 00:06:35.882 01:14:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=6398971904 00:06:35.882 01:14:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:35.882 01:14:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:35.882 01:14:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:35.882 01:14:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=30943838208 00:06:35.882 01:14:27 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=30997356544 00:06:35.882 01:14:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:06:35.882 01:14:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:35.882 01:14:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:35.882 01:14:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:35.882 01:14:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=12390182912 00:06:35.882 01:14:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12398944256 00:06:35.882 01:14:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=8761344 00:06:35.882 01:14:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:35.882 01:14:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:35.882 01:14:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:35.882 01:14:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=30996193280 00:06:35.882 01:14:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997356544 00:06:35.882 01:14:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=1163264 00:06:35.882 01:14:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:35.882 01:14:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:35.882 01:14:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:35.882 01:14:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=6199463936 00:06:35.882 01:14:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6199468032 00:06:35.882 01:14:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:06:35.882 01:14:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:35.882 01:14:27 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:06:35.883 * Looking for test storage... 
00:06:35.883 01:14:27 -- common/autotest_common.sh@359 -- # local target_space new_size 00:06:35.883 01:14:27 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:06:35.883 01:14:27 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.883 01:14:27 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:35.883 01:14:27 -- common/autotest_common.sh@363 -- # mount=/ 00:06:35.883 01:14:27 -- common/autotest_common.sh@365 -- # target_space=55595741184 00:06:35.883 01:14:27 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:06:35.883 01:14:27 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:06:35.883 01:14:27 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:06:35.883 01:14:27 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:06:35.883 01:14:27 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:06:35.883 01:14:27 -- common/autotest_common.sh@372 -- # new_size=8613564416 00:06:35.883 01:14:27 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:35.883 01:14:27 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.883 01:14:27 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.883 01:14:27 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.883 01:14:27 -- common/autotest_common.sh@380 -- # return 0 00:06:35.883 01:14:27 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:06:35.883 01:14:27 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:06:35.883 01:14:27 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:35.883 01:14:27 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:35.883 01:14:27 -- common/autotest_common.sh@1672 -- # true 00:06:35.883 01:14:27 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:06:35.883 01:14:27 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:35.883 01:14:27 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:35.883 01:14:27 -- common/autotest_common.sh@27 -- # exec 00:06:35.883 01:14:27 -- common/autotest_common.sh@29 -- # exec 00:06:35.883 01:14:27 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:35.883 01:14:27 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:35.883 01:14:27 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:35.883 01:14:27 -- common/autotest_common.sh@18 -- # set -x 00:06:35.883 01:14:27 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.883 01:14:27 -- nvmf/common.sh@7 -- # uname -s 00:06:35.883 01:14:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.883 01:14:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.883 01:14:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.883 01:14:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.883 01:14:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.883 01:14:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.883 01:14:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.883 01:14:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.883 01:14:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.883 01:14:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.883 01:14:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:35.883 01:14:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:35.883 01:14:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.883 01:14:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.883 01:14:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:35.883 01:14:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.883 01:14:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.883 01:14:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.883 01:14:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.883 01:14:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.883 01:14:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.883 01:14:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.883 01:14:27 -- paths/export.sh@5 -- # export PATH 00:06:35.883 01:14:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.883 01:14:27 -- nvmf/common.sh@46 -- # : 0 00:06:35.883 01:14:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:35.883 01:14:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:35.883 01:14:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:35.883 01:14:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.883 01:14:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.883 01:14:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:35.883 01:14:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:35.883 01:14:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:35.883 01:14:27 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:35.883 01:14:27 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:35.883 01:14:27 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:35.883 01:14:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:35.883 01:14:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:35.883 01:14:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:35.883 01:14:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:35.883 01:14:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:35.883 01:14:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.883 01:14:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:35.883 01:14:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.883 01:14:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:35.883 01:14:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:35.883 01:14:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:35.883 01:14:27 -- common/autotest_common.sh@10 -- # set +x 00:06:37.788 01:14:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:37.788 01:14:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:37.788 01:14:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:37.788 01:14:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:37.788 01:14:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:37.788 01:14:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:37.788 01:14:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:37.788 01:14:29 -- 
nvmf/common.sh@294 -- # net_devs=() 00:06:37.788 01:14:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:37.788 01:14:29 -- nvmf/common.sh@295 -- # e810=() 00:06:37.788 01:14:29 -- nvmf/common.sh@295 -- # local -ga e810 00:06:37.788 01:14:29 -- nvmf/common.sh@296 -- # x722=() 00:06:37.788 01:14:29 -- nvmf/common.sh@296 -- # local -ga x722 00:06:37.788 01:14:29 -- nvmf/common.sh@297 -- # mlx=() 00:06:37.788 01:14:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:37.788 01:14:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:37.788 01:14:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:37.788 01:14:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:37.788 01:14:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:37.788 01:14:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:37.788 01:14:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:37.788 01:14:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:37.788 01:14:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:37.788 01:14:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:37.788 01:14:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:37.788 01:14:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:37.788 01:14:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:37.788 01:14:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:37.788 01:14:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:37.788 01:14:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:37.788 01:14:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:37.788 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:37.788 01:14:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:37.788 01:14:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:37.788 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:37.788 01:14:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:37.788 01:14:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:37.788 01:14:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.788 01:14:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:37.788 01:14:29 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.788 01:14:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:37.788 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:37.788 01:14:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.788 01:14:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:37.788 01:14:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.788 01:14:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:37.788 01:14:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.788 01:14:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:37.788 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:37.788 01:14:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.788 01:14:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:37.788 01:14:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:37.788 01:14:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:37.788 01:14:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:37.788 01:14:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:37.788 01:14:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:37.788 01:14:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:37.788 01:14:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:37.788 01:14:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:37.788 01:14:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:37.788 01:14:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:37.788 01:14:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.788 01:14:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:37.788 01:14:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:37.788 01:14:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:37.788 01:14:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:37.788 01:14:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:37.788 01:14:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:37.788 01:14:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:37.788 01:14:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:37.788 01:14:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:37.788 01:14:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:37.788 01:14:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:37.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:37.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:06:37.788 00:06:37.788 --- 10.0.0.2 ping statistics --- 00:06:37.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.788 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:06:37.788 01:14:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:37.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:37.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:06:37.788 00:06:37.788 --- 10.0.0.1 ping statistics --- 00:06:37.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.788 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:06:37.788 01:14:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.788 01:14:29 -- nvmf/common.sh@410 -- # return 0 00:06:37.788 01:14:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:37.788 01:14:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.788 01:14:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:37.788 01:14:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.788 01:14:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:37.788 01:14:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:38.048 01:14:29 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:38.048 01:14:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:38.048 01:14:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.048 01:14:29 -- common/autotest_common.sh@10 -- # set +x 00:06:38.048 ************************************ 00:06:38.048 START TEST nvmf_filesystem_no_in_capsule 00:06:38.048 ************************************ 00:06:38.048 01:14:29 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:06:38.048 01:14:29 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:38.048 01:14:29 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:38.048 01:14:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:38.048 01:14:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:38.048 01:14:29 -- common/autotest_common.sh@10 -- # set +x 00:06:38.048 01:14:29 -- nvmf/common.sh@469 -- # nvmfpid=534961 00:06:38.048 01:14:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:38.048 01:14:29 -- nvmf/common.sh@470 -- # waitforlisten 534961 00:06:38.048 01:14:29 -- common/autotest_common.sh@819 -- # '[' -z 534961 ']' 00:06:38.048 01:14:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.048 01:14:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.048 01:14:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.048 01:14:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.048 01:14:29 -- common/autotest_common.sh@10 -- # set +x 00:06:38.048 [2024-07-27 01:14:29.613491] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:38.048 [2024-07-27 01:14:29.613578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.048 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.048 [2024-07-27 01:14:29.683342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.048 [2024-07-27 01:14:29.803381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.048 [2024-07-27 01:14:29.803570] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:38.048 [2024-07-27 01:14:29.803590] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:38.048 [2024-07-27 01:14:29.803605] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:38.048 [2024-07-27 01:14:29.803707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.048 [2024-07-27 01:14:29.803766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.048 [2024-07-27 01:14:29.803821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.048 [2024-07-27 01:14:29.803824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.984 01:14:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.984 01:14:30 -- common/autotest_common.sh@852 -- # return 0 00:06:38.984 01:14:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:38.985 01:14:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:38.985 01:14:30 -- common/autotest_common.sh@10 -- # set +x 00:06:38.985 01:14:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:38.985 01:14:30 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:38.985 01:14:30 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:38.985 01:14:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:38.985 01:14:30 -- common/autotest_common.sh@10 -- # set +x 00:06:38.985 [2024-07-27 01:14:30.569566] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.985 01:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:38.985 01:14:30 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:38.985 01:14:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:38.985 01:14:30 -- common/autotest_common.sh@10 -- # set +x 00:06:38.985 Malloc1 00:06:38.985 01:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:38.985 01:14:30 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:38.985 01:14:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:38.985 01:14:30 -- common/autotest_common.sh@10 -- # set +x 00:06:38.985 01:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:38.985 01:14:30 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:38.985 01:14:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:38.985 01:14:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.244 01:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:39.244 01:14:30 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
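(For reference: the target bring-up traced above reduces to roughly the rpc.py sequence below. This is a sketch, not what autotest literally runs — rpc_cmd wraps rpc.py, the nvmf_tgt and rpc.py paths assume a default SPDK checkout, and the network namespace is the cvl_0_0_ns_spdk one created earlier in this run.)

  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # same flags nvmfappstart passes; waitforlisten then polls /var/tmp/spdk.sock
  sudo ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0                # in-capsule data size 0 for this first pass
  sudo ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                       # 512 MB backing bdev, 512-byte blocks
  sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420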
00:06:39.244 01:14:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.244 01:14:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.244 [2024-07-27 01:14:30.748720] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:39.244 01:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:39.244 01:14:30 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:39.244 01:14:30 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:39.244 01:14:30 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:39.244 01:14:30 -- common/autotest_common.sh@1359 -- # local bs 00:06:39.244 01:14:30 -- common/autotest_common.sh@1360 -- # local nb 00:06:39.244 01:14:30 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:39.244 01:14:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.244 01:14:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.244 01:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:39.244 01:14:30 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:39.244 { 00:06:39.244 "name": "Malloc1", 00:06:39.244 "aliases": [ 00:06:39.244 "d6c18d99-8fd2-4add-9a1e-c196e2fa08e1" 00:06:39.244 ], 00:06:39.244 "product_name": "Malloc disk", 00:06:39.244 "block_size": 512, 00:06:39.244 "num_blocks": 1048576, 00:06:39.244 "uuid": "d6c18d99-8fd2-4add-9a1e-c196e2fa08e1", 00:06:39.244 "assigned_rate_limits": { 00:06:39.244 "rw_ios_per_sec": 0, 00:06:39.244 "rw_mbytes_per_sec": 0, 00:06:39.244 "r_mbytes_per_sec": 0, 00:06:39.244 "w_mbytes_per_sec": 0 00:06:39.244 }, 00:06:39.244 "claimed": true, 00:06:39.244 "claim_type": "exclusive_write", 00:06:39.244 "zoned": false, 00:06:39.244 "supported_io_types": { 00:06:39.244 "read": true, 00:06:39.244 "write": true, 00:06:39.244 "unmap": true, 00:06:39.244 "write_zeroes": true, 00:06:39.244 "flush": true, 00:06:39.244 "reset": true, 00:06:39.244 "compare": false, 00:06:39.244 "compare_and_write": false, 00:06:39.244 "abort": true, 00:06:39.244 "nvme_admin": false, 00:06:39.244 "nvme_io": false 00:06:39.244 }, 00:06:39.244 "memory_domains": [ 00:06:39.244 { 00:06:39.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:39.244 "dma_device_type": 2 00:06:39.244 } 00:06:39.244 ], 00:06:39.244 "driver_specific": {} 00:06:39.244 } 00:06:39.244 ]' 00:06:39.244 01:14:30 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:39.244 01:14:30 -- common/autotest_common.sh@1362 -- # bs=512 00:06:39.244 01:14:30 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:39.244 01:14:30 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:39.244 01:14:30 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:39.244 01:14:30 -- common/autotest_common.sh@1367 -- # echo 512 00:06:39.244 01:14:30 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:39.244 01:14:30 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:39.812 01:14:31 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:39.813 01:14:31 -- common/autotest_common.sh@1177 -- # local i=0 00:06:39.813 01:14:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:39.813 01:14:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:39.813 01:14:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:06:41.717 01:14:33 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:06:41.718 01:14:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:06:41.718 01:14:33 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:06:41.718 01:14:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:06:41.718 01:14:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:06:41.718 01:14:33 -- common/autotest_common.sh@1187 -- # return 0 00:06:41.718 01:14:33 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:41.718 01:14:33 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:41.975 01:14:33 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:41.975 01:14:33 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:41.975 01:14:33 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:41.975 01:14:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:41.975 01:14:33 -- setup/common.sh@80 -- # echo 536870912 00:06:41.975 01:14:33 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:41.975 01:14:33 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:41.975 01:14:33 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:41.975 01:14:33 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:41.975 01:14:33 -- target/filesystem.sh@69 -- # partprobe 00:06:43.377 01:14:34 -- target/filesystem.sh@70 -- # sleep 1 00:06:43.942 01:14:35 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:43.942 01:14:35 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:43.942 01:14:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:43.942 01:14:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.942 01:14:35 -- common/autotest_common.sh@10 -- # set +x 00:06:43.942 ************************************ 00:06:43.942 START TEST filesystem_ext4 00:06:43.942 ************************************ 00:06:43.942 01:14:35 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:43.942 01:14:35 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:43.942 01:14:35 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:43.942 01:14:35 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:43.942 01:14:35 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:06:43.942 01:14:35 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:43.942 01:14:35 -- common/autotest_common.sh@904 -- # local i=0 00:06:43.942 01:14:35 -- common/autotest_common.sh@905 -- # local force 00:06:43.942 01:14:35 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:06:43.942 01:14:35 -- common/autotest_common.sh@908 -- # force=-F 00:06:43.942 01:14:35 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:43.942 mke2fs 1.46.5 (30-Dec-2021) 00:06:44.201 Discarding device blocks: 0/522240 done 00:06:44.201 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:44.201 Filesystem UUID: f3199cb1-8b2a-4cc2-8306-29b8bb08d892 00:06:44.201 Superblock backups stored on blocks: 00:06:44.201 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:44.201 00:06:44.201 Allocating group tables: 0/64 done 00:06:44.201 Writing inode tables: 0/64 done 00:06:45.578 Creating journal (8192 blocks): done 00:06:45.578 Writing superblocks and filesystem accounting information: 0/64 done 00:06:45.578 00:06:45.578 01:14:36 -- 
common/autotest_common.sh@921 -- # return 0 00:06:45.578 01:14:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:46.143 01:14:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:46.143 01:14:37 -- target/filesystem.sh@25 -- # sync 00:06:46.143 01:14:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:46.144 01:14:37 -- target/filesystem.sh@27 -- # sync 00:06:46.144 01:14:37 -- target/filesystem.sh@29 -- # i=0 00:06:46.144 01:14:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:46.144 01:14:37 -- target/filesystem.sh@37 -- # kill -0 534961 00:06:46.144 01:14:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:46.144 01:14:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:46.144 01:14:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:46.144 01:14:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:46.144 00:06:46.144 real 0m2.185s 00:06:46.144 user 0m0.026s 00:06:46.144 sys 0m0.051s 00:06:46.144 01:14:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.144 01:14:37 -- common/autotest_common.sh@10 -- # set +x 00:06:46.144 ************************************ 00:06:46.144 END TEST filesystem_ext4 00:06:46.144 ************************************ 00:06:46.403 01:14:37 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:46.403 01:14:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:46.403 01:14:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.403 01:14:37 -- common/autotest_common.sh@10 -- # set +x 00:06:46.403 ************************************ 00:06:46.403 START TEST filesystem_btrfs 00:06:46.403 ************************************ 00:06:46.403 01:14:37 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:46.403 01:14:37 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:46.403 01:14:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:46.403 01:14:37 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:46.403 01:14:37 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:06:46.403 01:14:37 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:46.403 01:14:37 -- common/autotest_common.sh@904 -- # local i=0 00:06:46.403 01:14:37 -- common/autotest_common.sh@905 -- # local force 00:06:46.403 01:14:37 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:06:46.403 01:14:37 -- common/autotest_common.sh@910 -- # force=-f 00:06:46.403 01:14:37 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:46.403 btrfs-progs v6.6.2 00:06:46.403 See https://btrfs.readthedocs.io for more information. 00:06:46.403 00:06:46.403 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:46.403 NOTE: several default settings have changed in version 5.15, please make sure 00:06:46.403 this does not affect your deployments: 00:06:46.403 - DUP for metadata (-m dup) 00:06:46.403 - enabled no-holes (-O no-holes) 00:06:46.403 - enabled free-space-tree (-R free-space-tree) 00:06:46.403 00:06:46.403 Label: (null) 00:06:46.403 UUID: 73306840-c65e-4830-95f3-560a581cdd34 00:06:46.403 Node size: 16384 00:06:46.403 Sector size: 4096 00:06:46.403 Filesystem size: 510.00MiB 00:06:46.403 Block group profiles: 00:06:46.403 Data: single 8.00MiB 00:06:46.403 Metadata: DUP 32.00MiB 00:06:46.403 System: DUP 8.00MiB 00:06:46.403 SSD detected: yes 00:06:46.403 Zoned device: no 00:06:46.403 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:46.403 Runtime features: free-space-tree 00:06:46.403 Checksum: crc32c 00:06:46.403 Number of devices: 1 00:06:46.403 Devices: 00:06:46.403 ID SIZE PATH 00:06:46.403 1 510.00MiB /dev/nvme0n1p1 00:06:46.403 00:06:46.403 01:14:38 -- common/autotest_common.sh@921 -- # return 0 00:06:46.403 01:14:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:47.338 01:14:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:47.338 01:14:39 -- target/filesystem.sh@25 -- # sync 00:06:47.338 01:14:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:47.338 01:14:39 -- target/filesystem.sh@27 -- # sync 00:06:47.338 01:14:39 -- target/filesystem.sh@29 -- # i=0 00:06:47.338 01:14:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:47.338 01:14:39 -- target/filesystem.sh@37 -- # kill -0 534961 00:06:47.338 01:14:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:47.338 01:14:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:47.338 01:14:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:47.338 01:14:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:47.338 00:06:47.338 real 0m1.146s 00:06:47.338 user 0m0.019s 00:06:47.338 sys 0m0.111s 00:06:47.338 01:14:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.338 01:14:39 -- common/autotest_common.sh@10 -- # set +x 00:06:47.338 ************************************ 00:06:47.338 END TEST filesystem_btrfs 00:06:47.338 ************************************ 00:06:47.338 01:14:39 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:47.338 01:14:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:47.338 01:14:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.338 01:14:39 -- common/autotest_common.sh@10 -- # set +x 00:06:47.338 ************************************ 00:06:47.338 START TEST filesystem_xfs 00:06:47.338 ************************************ 00:06:47.338 01:14:39 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:06:47.338 01:14:39 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:47.338 01:14:39 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:47.338 01:14:39 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:47.338 01:14:39 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:06:47.338 01:14:39 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:47.338 01:14:39 -- common/autotest_common.sh@904 -- # local i=0 00:06:47.338 01:14:39 -- common/autotest_common.sh@905 -- # local force 00:06:47.338 01:14:39 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:06:47.338 01:14:39 -- common/autotest_common.sh@910 -- # force=-f 00:06:47.339 01:14:39 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:47.597 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:47.597 = sectsz=512 attr=2, projid32bit=1 00:06:47.597 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:47.597 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:47.597 data = bsize=4096 blocks=130560, imaxpct=25 00:06:47.597 = sunit=0 swidth=0 blks 00:06:47.597 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:47.597 log =internal log bsize=4096 blocks=16384, version=2 00:06:47.597 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:47.597 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:48.534 Discarding blocks...Done. 00:06:48.534 01:14:39 -- common/autotest_common.sh@921 -- # return 0 00:06:48.534 01:14:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:51.075 01:14:42 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:51.075 01:14:42 -- target/filesystem.sh@25 -- # sync 00:06:51.075 01:14:42 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:51.075 01:14:42 -- target/filesystem.sh@27 -- # sync 00:06:51.075 01:14:42 -- target/filesystem.sh@29 -- # i=0 00:06:51.075 01:14:42 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:51.075 01:14:42 -- target/filesystem.sh@37 -- # kill -0 534961 00:06:51.075 01:14:42 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:51.075 01:14:42 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:51.075 01:14:42 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:51.075 01:14:42 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:51.075 00:06:51.075 real 0m3.330s 00:06:51.075 user 0m0.020s 00:06:51.075 sys 0m0.065s 00:06:51.075 01:14:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.075 01:14:42 -- common/autotest_common.sh@10 -- # set +x 00:06:51.075 ************************************ 00:06:51.075 END TEST filesystem_xfs 00:06:51.075 ************************************ 00:06:51.075 01:14:42 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:51.075 01:14:42 -- target/filesystem.sh@93 -- # sync 00:06:51.075 01:14:42 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:51.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:51.335 01:14:42 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:51.335 01:14:42 -- common/autotest_common.sh@1198 -- # local i=0 00:06:51.335 01:14:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:06:51.335 01:14:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:51.335 01:14:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:51.335 01:14:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:51.335 01:14:42 -- common/autotest_common.sh@1210 -- # return 0 00:06:51.335 01:14:42 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:51.335 01:14:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.335 01:14:42 -- common/autotest_common.sh@10 -- # set +x 00:06:51.335 01:14:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.335 01:14:42 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:51.335 01:14:42 -- target/filesystem.sh@101 -- # killprocess 534961 00:06:51.335 01:14:42 -- common/autotest_common.sh@926 -- # '[' -z 534961 ']' 00:06:51.335 01:14:42 -- common/autotest_common.sh@930 -- # kill -0 534961 00:06:51.335 01:14:42 -- 
common/autotest_common.sh@931 -- # uname 00:06:51.335 01:14:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:51.335 01:14:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 534961 00:06:51.335 01:14:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:51.335 01:14:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:51.335 01:14:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 534961' 00:06:51.335 killing process with pid 534961 00:06:51.335 01:14:42 -- common/autotest_common.sh@945 -- # kill 534961 00:06:51.335 01:14:42 -- common/autotest_common.sh@950 -- # wait 534961 00:06:51.903 01:14:43 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:51.903 00:06:51.903 real 0m13.852s 00:06:51.903 user 0m53.317s 00:06:51.903 sys 0m1.932s 00:06:51.903 01:14:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.903 01:14:43 -- common/autotest_common.sh@10 -- # set +x 00:06:51.903 ************************************ 00:06:51.903 END TEST nvmf_filesystem_no_in_capsule 00:06:51.903 ************************************ 00:06:51.903 01:14:43 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:51.903 01:14:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:51.903 01:14:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.903 01:14:43 -- common/autotest_common.sh@10 -- # set +x 00:06:51.903 ************************************ 00:06:51.903 START TEST nvmf_filesystem_in_capsule 00:06:51.903 ************************************ 00:06:51.903 01:14:43 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:06:51.903 01:14:43 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:51.903 01:14:43 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:51.903 01:14:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:51.903 01:14:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:51.903 01:14:43 -- common/autotest_common.sh@10 -- # set +x 00:06:51.903 01:14:43 -- nvmf/common.sh@469 -- # nvmfpid=536836 00:06:51.903 01:14:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:51.903 01:14:43 -- nvmf/common.sh@470 -- # waitforlisten 536836 00:06:51.903 01:14:43 -- common/autotest_common.sh@819 -- # '[' -z 536836 ']' 00:06:51.903 01:14:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.903 01:14:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:51.903 01:14:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.903 01:14:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:51.903 01:14:43 -- common/autotest_common.sh@10 -- # set +x 00:06:51.903 [2024-07-27 01:14:43.491240] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:51.903 [2024-07-27 01:14:43.491320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.903 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.903 [2024-07-27 01:14:43.560974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.162 [2024-07-27 01:14:43.683076] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:52.162 [2024-07-27 01:14:43.683234] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.162 [2024-07-27 01:14:43.683254] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.162 [2024-07-27 01:14:43.683269] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:52.162 [2024-07-27 01:14:43.683348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.162 [2024-07-27 01:14:43.683386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.162 [2024-07-27 01:14:43.683437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.162 [2024-07-27 01:14:43.683440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.099 01:14:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:53.099 01:14:44 -- common/autotest_common.sh@852 -- # return 0 00:06:53.099 01:14:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:53.099 01:14:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:53.099 01:14:44 -- common/autotest_common.sh@10 -- # set +x 00:06:53.099 01:14:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.099 01:14:44 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:53.099 01:14:44 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:53.099 01:14:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:53.099 01:14:44 -- common/autotest_common.sh@10 -- # set +x 00:06:53.099 [2024-07-27 01:14:44.512756] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.099 01:14:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:53.099 01:14:44 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:53.099 01:14:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:53.099 01:14:44 -- common/autotest_common.sh@10 -- # set +x 00:06:53.099 Malloc1 00:06:53.099 01:14:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:53.099 01:14:44 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:53.099 01:14:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:53.099 01:14:44 -- common/autotest_common.sh@10 -- # set +x 00:06:53.099 01:14:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:53.099 01:14:44 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:53.099 01:14:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:53.099 01:14:44 -- common/autotest_common.sh@10 -- # set +x 00:06:53.099 01:14:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:53.099 01:14:44 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
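(The host-side check that each filesystem_* subtest above performs — and that the in-capsule variants below repeat — is, in outline, the sequence sketched here; nvme0n1 is simply the device name this run happened to get, and the ext4 line stands in for the btrfs and xfs passes.)

  sudo nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME          # locate the namespace; nvme0n1 in this log
  sudo parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% && sudo partprobe
  sudo mkfs.ext4 -F /dev/nvme0n1p1                             # mkfs.btrfs -f / mkfs.xfs -f for the other passes
  sudo mkdir -p /mnt/device && sudo mount /dev/nvme0n1p1 /mnt/device
  sudo touch /mnt/device/aaa && sync && sudo rm /mnt/device/aaa && sync
  sudo umount /mnt/device && sudo flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1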
00:06:53.099 01:14:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:53.099 01:14:44 -- common/autotest_common.sh@10 -- # set +x 00:06:53.099 [2024-07-27 01:14:44.685050] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.099 01:14:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:53.099 01:14:44 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:53.099 01:14:44 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:53.099 01:14:44 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:53.099 01:14:44 -- common/autotest_common.sh@1359 -- # local bs 00:06:53.099 01:14:44 -- common/autotest_common.sh@1360 -- # local nb 00:06:53.099 01:14:44 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:53.099 01:14:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:53.099 01:14:44 -- common/autotest_common.sh@10 -- # set +x 00:06:53.099 01:14:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:53.099 01:14:44 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:53.099 { 00:06:53.099 "name": "Malloc1", 00:06:53.099 "aliases": [ 00:06:53.099 "470ece00-793c-40a2-a8d1-f77cf8ddc8d1" 00:06:53.099 ], 00:06:53.099 "product_name": "Malloc disk", 00:06:53.099 "block_size": 512, 00:06:53.099 "num_blocks": 1048576, 00:06:53.099 "uuid": "470ece00-793c-40a2-a8d1-f77cf8ddc8d1", 00:06:53.099 "assigned_rate_limits": { 00:06:53.099 "rw_ios_per_sec": 0, 00:06:53.099 "rw_mbytes_per_sec": 0, 00:06:53.099 "r_mbytes_per_sec": 0, 00:06:53.099 "w_mbytes_per_sec": 0 00:06:53.099 }, 00:06:53.099 "claimed": true, 00:06:53.099 "claim_type": "exclusive_write", 00:06:53.099 "zoned": false, 00:06:53.099 "supported_io_types": { 00:06:53.099 "read": true, 00:06:53.099 "write": true, 00:06:53.099 "unmap": true, 00:06:53.099 "write_zeroes": true, 00:06:53.099 "flush": true, 00:06:53.099 "reset": true, 00:06:53.099 "compare": false, 00:06:53.099 "compare_and_write": false, 00:06:53.099 "abort": true, 00:06:53.099 "nvme_admin": false, 00:06:53.099 "nvme_io": false 00:06:53.099 }, 00:06:53.099 "memory_domains": [ 00:06:53.099 { 00:06:53.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.099 "dma_device_type": 2 00:06:53.099 } 00:06:53.099 ], 00:06:53.099 "driver_specific": {} 00:06:53.099 } 00:06:53.099 ]' 00:06:53.099 01:14:44 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:53.099 01:14:44 -- common/autotest_common.sh@1362 -- # bs=512 00:06:53.099 01:14:44 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:53.099 01:14:44 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:53.099 01:14:44 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:53.099 01:14:44 -- common/autotest_common.sh@1367 -- # echo 512 00:06:53.099 01:14:44 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:53.099 01:14:44 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:54.036 01:14:45 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:54.036 01:14:45 -- common/autotest_common.sh@1177 -- # local i=0 00:06:54.036 01:14:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:54.036 01:14:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:54.036 01:14:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:06:55.941 01:14:47 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:06:55.941 01:14:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:06:55.941 01:14:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:06:55.941 01:14:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:06:55.941 01:14:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:06:55.941 01:14:47 -- common/autotest_common.sh@1187 -- # return 0 00:06:55.941 01:14:47 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:55.941 01:14:47 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:55.941 01:14:47 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:55.941 01:14:47 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:55.941 01:14:47 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:55.941 01:14:47 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:55.941 01:14:47 -- setup/common.sh@80 -- # echo 536870912 00:06:55.941 01:14:47 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:55.941 01:14:47 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:55.941 01:14:47 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:55.941 01:14:47 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:56.200 01:14:47 -- target/filesystem.sh@69 -- # partprobe 00:06:56.769 01:14:48 -- target/filesystem.sh@70 -- # sleep 1 00:06:57.705 01:14:49 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:57.705 01:14:49 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:57.705 01:14:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:57.705 01:14:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.705 01:14:49 -- common/autotest_common.sh@10 -- # set +x 00:06:57.705 ************************************ 00:06:57.705 START TEST filesystem_in_capsule_ext4 00:06:57.705 ************************************ 00:06:57.705 01:14:49 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:57.705 01:14:49 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:57.705 01:14:49 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:57.705 01:14:49 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:57.705 01:14:49 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:06:57.705 01:14:49 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:57.705 01:14:49 -- common/autotest_common.sh@904 -- # local i=0 00:06:57.705 01:14:49 -- common/autotest_common.sh@905 -- # local force 00:06:57.705 01:14:49 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:06:57.705 01:14:49 -- common/autotest_common.sh@908 -- # force=-F 00:06:57.705 01:14:49 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:57.705 mke2fs 1.46.5 (30-Dec-2021) 00:06:57.705 Discarding device blocks: 0/522240 done 00:06:57.705 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:57.705 Filesystem UUID: 61b8a77a-b7ac-41ad-97ae-314e5342dc41 00:06:57.705 Superblock backups stored on blocks: 00:06:57.705 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:57.705 00:06:57.705 Allocating group tables: 0/64 done 00:06:57.705 Writing inode tables: 0/64 done 00:06:58.645 Creating journal (8192 blocks): done 00:06:58.645 Writing superblocks and filesystem accounting information: 0/64 done 00:06:58.645 00:06:58.645 
01:14:50 -- common/autotest_common.sh@921 -- # return 0 00:06:58.645 01:14:50 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:58.904 01:14:50 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:58.904 01:14:50 -- target/filesystem.sh@25 -- # sync 00:06:58.904 01:14:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:58.904 01:14:50 -- target/filesystem.sh@27 -- # sync 00:06:58.904 01:14:50 -- target/filesystem.sh@29 -- # i=0 00:06:58.904 01:14:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:58.904 01:14:50 -- target/filesystem.sh@37 -- # kill -0 536836 00:06:58.904 01:14:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:58.904 01:14:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:58.904 01:14:50 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:58.904 01:14:50 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:58.904 00:06:58.904 real 0m1.299s 00:06:58.904 user 0m0.016s 00:06:58.904 sys 0m0.053s 00:06:58.904 01:14:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.904 01:14:50 -- common/autotest_common.sh@10 -- # set +x 00:06:58.904 ************************************ 00:06:58.904 END TEST filesystem_in_capsule_ext4 00:06:58.904 ************************************ 00:06:58.904 01:14:50 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:58.904 01:14:50 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:58.904 01:14:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.904 01:14:50 -- common/autotest_common.sh@10 -- # set +x 00:06:58.904 ************************************ 00:06:58.904 START TEST filesystem_in_capsule_btrfs 00:06:58.904 ************************************ 00:06:58.904 01:14:50 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:58.904 01:14:50 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:58.904 01:14:50 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:58.904 01:14:50 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:58.904 01:14:50 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:06:58.905 01:14:50 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:58.905 01:14:50 -- common/autotest_common.sh@904 -- # local i=0 00:06:58.905 01:14:50 -- common/autotest_common.sh@905 -- # local force 00:06:58.905 01:14:50 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:06:58.905 01:14:50 -- common/autotest_common.sh@910 -- # force=-f 00:06:58.905 01:14:50 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:59.164 btrfs-progs v6.6.2 00:06:59.164 See https://btrfs.readthedocs.io for more information. 00:06:59.164 00:06:59.164 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:59.165 NOTE: several default settings have changed in version 5.15, please make sure 00:06:59.165 this does not affect your deployments: 00:06:59.165 - DUP for metadata (-m dup) 00:06:59.165 - enabled no-holes (-O no-holes) 00:06:59.165 - enabled free-space-tree (-R free-space-tree) 00:06:59.165 00:06:59.165 Label: (null) 00:06:59.165 UUID: ca1c5973-07b9-47ef-8b13-c9f34c5a05b6 00:06:59.165 Node size: 16384 00:06:59.165 Sector size: 4096 00:06:59.165 Filesystem size: 510.00MiB 00:06:59.165 Block group profiles: 00:06:59.165 Data: single 8.00MiB 00:06:59.165 Metadata: DUP 32.00MiB 00:06:59.165 System: DUP 8.00MiB 00:06:59.165 SSD detected: yes 00:06:59.165 Zoned device: no 00:06:59.165 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:59.165 Runtime features: free-space-tree 00:06:59.165 Checksum: crc32c 00:06:59.165 Number of devices: 1 00:06:59.165 Devices: 00:06:59.165 ID SIZE PATH 00:06:59.165 1 510.00MiB /dev/nvme0n1p1 00:06:59.165 00:06:59.165 01:14:50 -- common/autotest_common.sh@921 -- # return 0 00:06:59.165 01:14:50 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:00.104 01:14:51 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:00.104 01:14:51 -- target/filesystem.sh@25 -- # sync 00:07:00.104 01:14:51 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:00.104 01:14:51 -- target/filesystem.sh@27 -- # sync 00:07:00.104 01:14:51 -- target/filesystem.sh@29 -- # i=0 00:07:00.104 01:14:51 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:00.104 01:14:51 -- target/filesystem.sh@37 -- # kill -0 536836 00:07:00.104 01:14:51 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:00.104 01:14:51 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:00.104 01:14:51 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:00.104 01:14:51 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:00.104 00:07:00.104 real 0m1.275s 00:07:00.104 user 0m0.017s 00:07:00.104 sys 0m0.115s 00:07:00.104 01:14:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.104 01:14:51 -- common/autotest_common.sh@10 -- # set +x 00:07:00.104 ************************************ 00:07:00.104 END TEST filesystem_in_capsule_btrfs 00:07:00.104 ************************************ 00:07:00.104 01:14:51 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:00.104 01:14:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:00.104 01:14:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.104 01:14:51 -- common/autotest_common.sh@10 -- # set +x 00:07:00.104 ************************************ 00:07:00.104 START TEST filesystem_in_capsule_xfs 00:07:00.104 ************************************ 00:07:00.104 01:14:51 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:00.104 01:14:51 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:00.104 01:14:51 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:00.104 01:14:51 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:00.104 01:14:51 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:00.104 01:14:51 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:00.104 01:14:51 -- common/autotest_common.sh@904 -- # local i=0 00:07:00.104 01:14:51 -- common/autotest_common.sh@905 -- # local force 00:07:00.104 01:14:51 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:00.104 01:14:51 -- common/autotest_common.sh@910 -- # force=-f 
00:07:00.104 01:14:51 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:00.392 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:00.392 = sectsz=512 attr=2, projid32bit=1 00:07:00.392 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:00.392 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:00.392 data = bsize=4096 blocks=130560, imaxpct=25 00:07:00.392 = sunit=0 swidth=0 blks 00:07:00.392 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:00.392 log =internal log bsize=4096 blocks=16384, version=2 00:07:00.392 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:00.392 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:01.341 Discarding blocks...Done. 00:07:01.341 01:14:52 -- common/autotest_common.sh@921 -- # return 0 00:07:01.342 01:14:52 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:03.893 01:14:55 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:03.893 01:14:55 -- target/filesystem.sh@25 -- # sync 00:07:03.893 01:14:55 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:03.893 01:14:55 -- target/filesystem.sh@27 -- # sync 00:07:03.893 01:14:55 -- target/filesystem.sh@29 -- # i=0 00:07:03.893 01:14:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:03.893 01:14:55 -- target/filesystem.sh@37 -- # kill -0 536836 00:07:03.893 01:14:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:03.893 01:14:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:03.893 01:14:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:03.893 01:14:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:03.893 00:07:03.893 real 0m3.453s 00:07:03.893 user 0m0.009s 00:07:03.893 sys 0m0.063s 00:07:03.893 01:14:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.893 01:14:55 -- common/autotest_common.sh@10 -- # set +x 00:07:03.893 ************************************ 00:07:03.893 END TEST filesystem_in_capsule_xfs 00:07:03.893 ************************************ 00:07:03.893 01:14:55 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:03.893 01:14:55 -- target/filesystem.sh@93 -- # sync 00:07:03.893 01:14:55 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:03.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.893 01:14:55 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:03.893 01:14:55 -- common/autotest_common.sh@1198 -- # local i=0 00:07:03.893 01:14:55 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:03.893 01:14:55 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.893 01:14:55 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:03.893 01:14:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.893 01:14:55 -- common/autotest_common.sh@1210 -- # return 0 00:07:03.893 01:14:55 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:03.893 01:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:03.893 01:14:55 -- common/autotest_common.sh@10 -- # set +x 00:07:03.893 01:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:03.893 01:14:55 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:03.893 01:14:55 -- target/filesystem.sh@101 -- # killprocess 536836 00:07:03.893 01:14:55 -- common/autotest_common.sh@926 -- # '[' -z 536836 ']' 00:07:03.893 01:14:55 -- common/autotest_common.sh@930 -- # kill -0 536836 
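(Teardown, traced above and below, is the setup in reverse; roughly the following, with the pid and interface names taken from this particular run.)

  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  sudo ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sudo kill 536836                          # nvmf_tgt pid for the in-capsule pass; killprocess then waits for it to exit
  sudo modprobe -v -r nvme-tcp              # nvmftestfini: unload the host-side modules ...
  sudo modprobe -v -r nvme-fabrics
  sudo ip -4 addr flush cvl_0_1             # ... flush the test interface and drop the cvl_0_0_ns_spdk namespace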
00:07:03.893 01:14:55 -- common/autotest_common.sh@931 -- # uname 00:07:03.893 01:14:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:03.893 01:14:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 536836 00:07:03.893 01:14:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:03.893 01:14:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:03.893 01:14:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 536836' 00:07:03.893 killing process with pid 536836 00:07:03.893 01:14:55 -- common/autotest_common.sh@945 -- # kill 536836 00:07:03.893 01:14:55 -- common/autotest_common.sh@950 -- # wait 536836 00:07:04.464 01:14:56 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:04.464 00:07:04.464 real 0m12.564s 00:07:04.464 user 0m48.205s 00:07:04.464 sys 0m1.836s 00:07:04.464 01:14:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.464 01:14:56 -- common/autotest_common.sh@10 -- # set +x 00:07:04.464 ************************************ 00:07:04.464 END TEST nvmf_filesystem_in_capsule 00:07:04.464 ************************************ 00:07:04.464 01:14:56 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:04.464 01:14:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:04.464 01:14:56 -- nvmf/common.sh@116 -- # sync 00:07:04.465 01:14:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:04.465 01:14:56 -- nvmf/common.sh@119 -- # set +e 00:07:04.465 01:14:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:04.465 01:14:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:04.465 rmmod nvme_tcp 00:07:04.465 rmmod nvme_fabrics 00:07:04.465 rmmod nvme_keyring 00:07:04.465 01:14:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:04.465 01:14:56 -- nvmf/common.sh@123 -- # set -e 00:07:04.465 01:14:56 -- nvmf/common.sh@124 -- # return 0 00:07:04.465 01:14:56 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:04.465 01:14:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:04.465 01:14:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:04.465 01:14:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:04.465 01:14:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:04.465 01:14:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:04.465 01:14:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.465 01:14:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.465 01:14:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.384 01:14:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:06.384 00:07:06.384 real 0m30.775s 00:07:06.384 user 1m42.368s 00:07:06.384 sys 0m5.288s 00:07:06.384 01:14:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.384 01:14:58 -- common/autotest_common.sh@10 -- # set +x 00:07:06.384 ************************************ 00:07:06.384 END TEST nvmf_filesystem 00:07:06.384 ************************************ 00:07:06.641 01:14:58 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:06.641 01:14:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:06.641 01:14:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.641 01:14:58 -- common/autotest_common.sh@10 -- # set +x 00:07:06.641 ************************************ 00:07:06.641 START TEST nvmf_discovery 00:07:06.641 ************************************ 00:07:06.641 01:14:58 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:06.641 * Looking for test storage... 00:07:06.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.641 01:14:58 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.641 01:14:58 -- nvmf/common.sh@7 -- # uname -s 00:07:06.641 01:14:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.641 01:14:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.641 01:14:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.641 01:14:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.641 01:14:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.641 01:14:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.641 01:14:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.641 01:14:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.641 01:14:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.641 01:14:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.641 01:14:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:06.641 01:14:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:06.641 01:14:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.641 01:14:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.641 01:14:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.641 01:14:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.641 01:14:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.641 01:14:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.641 01:14:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.641 01:14:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.641 01:14:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.641 01:14:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.641 01:14:58 -- paths/export.sh@5 -- # export PATH 00:07:06.641 01:14:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.641 01:14:58 -- nvmf/common.sh@46 -- # : 0 00:07:06.641 01:14:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:06.641 01:14:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:06.641 01:14:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:06.641 01:14:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.641 01:14:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.641 01:14:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:06.641 01:14:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:06.641 01:14:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:06.641 01:14:58 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:06.641 01:14:58 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:06.641 01:14:58 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:06.641 01:14:58 -- target/discovery.sh@15 -- # hash nvme 00:07:06.641 01:14:58 -- target/discovery.sh@20 -- # nvmftestinit 00:07:06.641 01:14:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:06.641 01:14:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.641 01:14:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:06.641 01:14:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:06.641 01:14:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:06.641 01:14:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.641 01:14:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.641 01:14:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.641 01:14:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:06.641 01:14:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:06.641 01:14:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:06.641 01:14:58 -- common/autotest_common.sh@10 -- # set +x 00:07:08.544 01:15:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:08.544 01:15:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:08.544 01:15:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:08.544 01:15:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:08.544 01:15:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:08.544 01:15:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:08.544 01:15:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:08.544 01:15:00 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:08.544 01:15:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:08.544 01:15:00 -- nvmf/common.sh@295 -- # e810=() 00:07:08.544 01:15:00 -- nvmf/common.sh@295 -- # local -ga e810 00:07:08.544 01:15:00 -- nvmf/common.sh@296 -- # x722=() 00:07:08.544 01:15:00 -- nvmf/common.sh@296 -- # local -ga x722 00:07:08.544 01:15:00 -- nvmf/common.sh@297 -- # mlx=() 00:07:08.544 01:15:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:08.544 01:15:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.544 01:15:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.544 01:15:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.544 01:15:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.544 01:15:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.544 01:15:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.544 01:15:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.544 01:15:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.544 01:15:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.544 01:15:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.544 01:15:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.544 01:15:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:08.544 01:15:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:08.544 01:15:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:08.544 01:15:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:08.544 01:15:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:08.544 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:08.544 01:15:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:08.544 01:15:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:08.544 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:08.544 01:15:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:08.544 01:15:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:08.544 01:15:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.544 01:15:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:08.544 01:15:00 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.544 01:15:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:08.544 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:08.544 01:15:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.544 01:15:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:08.544 01:15:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.544 01:15:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:08.544 01:15:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.544 01:15:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:08.544 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:08.544 01:15:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.544 01:15:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:08.544 01:15:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:08.544 01:15:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:08.544 01:15:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:08.544 01:15:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.544 01:15:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.544 01:15:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.544 01:15:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:08.544 01:15:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.544 01:15:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.544 01:15:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:08.544 01:15:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.544 01:15:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.545 01:15:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:08.545 01:15:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:08.545 01:15:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.545 01:15:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.545 01:15:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.545 01:15:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.545 01:15:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:08.545 01:15:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.545 01:15:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.545 01:15:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.545 01:15:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:08.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:07:08.545 00:07:08.545 --- 10.0.0.2 ping statistics --- 00:07:08.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.545 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:07:08.545 01:15:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:08.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:07:08.545 00:07:08.545 --- 10.0.0.1 ping statistics --- 00:07:08.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.545 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:07:08.545 01:15:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.545 01:15:00 -- nvmf/common.sh@410 -- # return 0 00:07:08.545 01:15:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:08.545 01:15:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.545 01:15:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:08.545 01:15:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:08.545 01:15:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.545 01:15:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:08.545 01:15:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:08.545 01:15:00 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:08.545 01:15:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:08.545 01:15:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:08.545 01:15:00 -- common/autotest_common.sh@10 -- # set +x 00:07:08.545 01:15:00 -- nvmf/common.sh@469 -- # nvmfpid=540516 00:07:08.545 01:15:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:08.545 01:15:00 -- nvmf/common.sh@470 -- # waitforlisten 540516 00:07:08.545 01:15:00 -- common/autotest_common.sh@819 -- # '[' -z 540516 ']' 00:07:08.545 01:15:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.545 01:15:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:08.545 01:15:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.545 01:15:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:08.545 01:15:00 -- common/autotest_common.sh@10 -- # set +x 00:07:08.803 [2024-07-27 01:15:00.330601] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:08.803 [2024-07-27 01:15:00.330690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.803 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.803 [2024-07-27 01:15:00.401774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.803 [2024-07-27 01:15:00.522404] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:08.803 [2024-07-27 01:15:00.522552] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.803 [2024-07-27 01:15:00.522577] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.803 [2024-07-27 01:15:00.522593] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:08.803 [2024-07-27 01:15:00.522659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.803 [2024-07-27 01:15:00.522712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.803 [2024-07-27 01:15:00.522772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.803 [2024-07-27 01:15:00.522775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.737 01:15:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:09.737 01:15:01 -- common/autotest_common.sh@852 -- # return 0 00:07:09.737 01:15:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:09.737 01:15:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 01:15:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.737 01:15:01 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 [2024-07-27 01:15:01.288516] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@26 -- # seq 1 4 00:07:09.737 01:15:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:09.737 01:15:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 Null1 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 [2024-07-27 01:15:01.328789] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:09.737 01:15:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 Null2 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:09.737 01:15:01 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:09.737 01:15:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 Null3 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:09.737 01:15:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 Null4 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:09.737 
01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:09.737 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.737 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.737 01:15:01 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:07:09.996 00:07:09.996 Discovery Log Number of Records 6, Generation counter 6 00:07:09.996 =====Discovery Log Entry 0====== 00:07:09.996 trtype: tcp 00:07:09.996 adrfam: ipv4 00:07:09.996 subtype: current discovery subsystem 00:07:09.996 treq: not required 00:07:09.996 portid: 0 00:07:09.996 trsvcid: 4420 00:07:09.996 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:09.996 traddr: 10.0.0.2 00:07:09.996 eflags: explicit discovery connections, duplicate discovery information 00:07:09.996 sectype: none 00:07:09.996 =====Discovery Log Entry 1====== 00:07:09.996 trtype: tcp 00:07:09.996 adrfam: ipv4 00:07:09.996 subtype: nvme subsystem 00:07:09.996 treq: not required 00:07:09.996 portid: 0 00:07:09.996 trsvcid: 4420 00:07:09.996 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:09.996 traddr: 10.0.0.2 00:07:09.996 eflags: none 00:07:09.996 sectype: none 00:07:09.996 =====Discovery Log Entry 2====== 00:07:09.996 trtype: tcp 00:07:09.996 adrfam: ipv4 00:07:09.996 subtype: nvme subsystem 00:07:09.996 treq: not required 00:07:09.996 portid: 0 00:07:09.996 trsvcid: 4420 00:07:09.996 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:09.996 traddr: 10.0.0.2 00:07:09.996 eflags: none 00:07:09.996 sectype: none 00:07:09.996 =====Discovery Log Entry 3====== 00:07:09.996 trtype: tcp 00:07:09.996 adrfam: ipv4 00:07:09.996 subtype: nvme subsystem 00:07:09.996 treq: not required 00:07:09.996 portid: 0 00:07:09.996 trsvcid: 4420 00:07:09.996 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:09.996 traddr: 10.0.0.2 00:07:09.996 eflags: none 00:07:09.996 sectype: none 00:07:09.996 =====Discovery Log Entry 4====== 00:07:09.996 trtype: tcp 00:07:09.996 adrfam: ipv4 00:07:09.996 subtype: nvme subsystem 00:07:09.996 treq: not required 00:07:09.996 portid: 0 00:07:09.996 trsvcid: 4420 00:07:09.997 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:09.997 traddr: 10.0.0.2 00:07:09.997 eflags: none 00:07:09.997 sectype: none 00:07:09.997 =====Discovery Log Entry 5====== 00:07:09.997 trtype: tcp 00:07:09.997 adrfam: ipv4 00:07:09.997 subtype: discovery subsystem referral 00:07:09.997 treq: not required 00:07:09.997 portid: 0 00:07:09.997 trsvcid: 4430 00:07:09.997 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:09.997 traddr: 10.0.0.2 00:07:09.997 eflags: none 00:07:09.997 sectype: none 00:07:09.997 01:15:01 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:09.997 Perform nvmf subsystem discovery via RPC 00:07:09.997 01:15:01 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:09.997 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.997 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.997 [2024-07-27 01:15:01.657756] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:09.997 [ 00:07:09.997 { 00:07:09.997 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:09.997 "subtype": "Discovery", 00:07:09.997 "listen_addresses": [ 00:07:09.997 { 00:07:09.997 "transport": "TCP", 00:07:09.997 "trtype": "TCP", 00:07:09.997 "adrfam": "IPv4", 00:07:09.997 "traddr": "10.0.0.2", 00:07:09.997 "trsvcid": "4420" 00:07:09.997 } 00:07:09.997 ], 00:07:09.997 "allow_any_host": true, 00:07:09.997 "hosts": [] 00:07:09.997 }, 00:07:09.997 { 00:07:09.997 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:09.997 "subtype": "NVMe", 00:07:09.997 "listen_addresses": [ 00:07:09.997 { 00:07:09.997 "transport": "TCP", 00:07:09.997 "trtype": "TCP", 00:07:09.997 "adrfam": "IPv4", 00:07:09.997 "traddr": "10.0.0.2", 00:07:09.997 "trsvcid": "4420" 00:07:09.997 } 00:07:09.997 ], 00:07:09.997 "allow_any_host": true, 00:07:09.997 "hosts": [], 00:07:09.997 "serial_number": "SPDK00000000000001", 00:07:09.997 "model_number": "SPDK bdev Controller", 00:07:09.997 "max_namespaces": 32, 00:07:09.997 "min_cntlid": 1, 00:07:09.997 "max_cntlid": 65519, 00:07:09.997 "namespaces": [ 00:07:09.997 { 00:07:09.997 "nsid": 1, 00:07:09.997 "bdev_name": "Null1", 00:07:09.997 "name": "Null1", 00:07:09.997 "nguid": "D270D46A81424E6EA5D540E70AAB7A28", 00:07:09.997 "uuid": "d270d46a-8142-4e6e-a5d5-40e70aab7a28" 00:07:09.997 } 00:07:09.997 ] 00:07:09.997 }, 00:07:09.997 { 00:07:09.997 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:09.997 "subtype": "NVMe", 00:07:09.997 "listen_addresses": [ 00:07:09.997 { 00:07:09.997 "transport": "TCP", 00:07:09.997 "trtype": "TCP", 00:07:09.997 "adrfam": "IPv4", 00:07:09.997 "traddr": "10.0.0.2", 00:07:09.997 "trsvcid": "4420" 00:07:09.997 } 00:07:09.997 ], 00:07:09.997 "allow_any_host": true, 00:07:09.997 "hosts": [], 00:07:09.997 "serial_number": "SPDK00000000000002", 00:07:09.997 "model_number": "SPDK bdev Controller", 00:07:09.997 "max_namespaces": 32, 00:07:09.997 "min_cntlid": 1, 00:07:09.997 "max_cntlid": 65519, 00:07:09.997 "namespaces": [ 00:07:09.997 { 00:07:09.997 "nsid": 1, 00:07:09.997 "bdev_name": "Null2", 00:07:09.997 "name": "Null2", 00:07:09.997 "nguid": "BB8D3F88A6584965A496420F3A0A5597", 00:07:09.997 "uuid": "bb8d3f88-a658-4965-a496-420f3a0a5597" 00:07:09.997 } 00:07:09.997 ] 00:07:09.997 }, 00:07:09.997 { 00:07:09.997 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:09.997 "subtype": "NVMe", 00:07:09.997 "listen_addresses": [ 00:07:09.997 { 00:07:09.997 "transport": "TCP", 00:07:09.997 "trtype": "TCP", 00:07:09.997 "adrfam": "IPv4", 00:07:09.997 "traddr": "10.0.0.2", 00:07:09.997 "trsvcid": "4420" 00:07:09.997 } 00:07:09.997 ], 00:07:09.997 "allow_any_host": true, 00:07:09.997 "hosts": [], 00:07:09.997 "serial_number": "SPDK00000000000003", 00:07:09.997 "model_number": "SPDK bdev Controller", 00:07:09.997 "max_namespaces": 32, 00:07:09.997 "min_cntlid": 1, 00:07:09.997 "max_cntlid": 65519, 00:07:09.997 "namespaces": [ 00:07:09.997 { 00:07:09.997 "nsid": 1, 00:07:09.997 "bdev_name": "Null3", 00:07:09.997 "name": "Null3", 00:07:09.997 "nguid": "0393E862E68E4FC8A04F1AC2B5776976", 00:07:09.997 "uuid": "0393e862-e68e-4fc8-a04f-1ac2b5776976" 00:07:09.997 } 00:07:09.997 ] 
00:07:09.997 }, 00:07:09.997 { 00:07:09.997 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:09.997 "subtype": "NVMe", 00:07:09.997 "listen_addresses": [ 00:07:09.997 { 00:07:09.997 "transport": "TCP", 00:07:09.997 "trtype": "TCP", 00:07:09.997 "adrfam": "IPv4", 00:07:09.997 "traddr": "10.0.0.2", 00:07:09.997 "trsvcid": "4420" 00:07:09.997 } 00:07:09.997 ], 00:07:09.997 "allow_any_host": true, 00:07:09.997 "hosts": [], 00:07:09.997 "serial_number": "SPDK00000000000004", 00:07:09.997 "model_number": "SPDK bdev Controller", 00:07:09.997 "max_namespaces": 32, 00:07:09.997 "min_cntlid": 1, 00:07:09.997 "max_cntlid": 65519, 00:07:09.997 "namespaces": [ 00:07:09.997 { 00:07:09.997 "nsid": 1, 00:07:09.997 "bdev_name": "Null4", 00:07:09.997 "name": "Null4", 00:07:09.997 "nguid": "AC42AADB710A42F195522CE14DE7B0EF", 00:07:09.997 "uuid": "ac42aadb-710a-42f1-9552-2ce14de7b0ef" 00:07:09.997 } 00:07:09.997 ] 00:07:09.997 } 00:07:09.997 ] 00:07:09.997 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.997 01:15:01 -- target/discovery.sh@42 -- # seq 1 4 00:07:09.997 01:15:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:09.997 01:15:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:09.997 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.997 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.997 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.997 01:15:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:09.997 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.997 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.997 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.997 01:15:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:09.997 01:15:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:09.997 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.997 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.997 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.997 01:15:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:09.997 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.997 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.997 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.997 01:15:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:09.997 01:15:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:09.997 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.997 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.997 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.997 01:15:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:09.997 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.997 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.997 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.997 01:15:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:09.997 01:15:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:09.997 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.997 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.997 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
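Annotation: the nvmf_get_subsystems JSON above lists the discovery subsystem plus the four null-bdev subsystems that discovery.sh created earlier, and the seq 1 4 loop being traced here (and continuing below) deletes each subsystem and its Null bdev again. Pulled out of the harness, one iteration of that create/inspect/cleanup cycle against a running target might look like the following; invoking scripts/rpc.py directly instead of the test's rpc_cmd wrapper is an assumption for illustration, the method names and arguments mirror the rpc_cmd calls in this trace.

    # Illustrative sketch of one discovery.sh subsystem lifecycle iteration.
    RPC=./scripts/rpc.py      # assumed path to the SPDK RPC client
    i=1
    $RPC bdev_null_create Null$i 102400 512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    # list the subsystem NQNs the target now reports, as in the JSON above
    $RPC nvmf_get_subsystems | jq -r '.[].nqn'
    # teardown, matching the loop traced around this point
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    $RPC bdev_null_delete Null$i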
00:07:09.997 01:15:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:09.997 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.997 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.997 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.997 01:15:01 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:09.997 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.997 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.997 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.997 01:15:01 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:09.997 01:15:01 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:09.997 01:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.997 01:15:01 -- common/autotest_common.sh@10 -- # set +x 00:07:10.257 01:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:10.257 01:15:01 -- target/discovery.sh@49 -- # check_bdevs= 00:07:10.257 01:15:01 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:10.257 01:15:01 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:10.257 01:15:01 -- target/discovery.sh@57 -- # nvmftestfini 00:07:10.257 01:15:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:10.257 01:15:01 -- nvmf/common.sh@116 -- # sync 00:07:10.257 01:15:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:10.257 01:15:01 -- nvmf/common.sh@119 -- # set +e 00:07:10.257 01:15:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:10.257 01:15:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:10.257 rmmod nvme_tcp 00:07:10.257 rmmod nvme_fabrics 00:07:10.257 rmmod nvme_keyring 00:07:10.257 01:15:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:10.257 01:15:01 -- nvmf/common.sh@123 -- # set -e 00:07:10.257 01:15:01 -- nvmf/common.sh@124 -- # return 0 00:07:10.257 01:15:01 -- nvmf/common.sh@477 -- # '[' -n 540516 ']' 00:07:10.257 01:15:01 -- nvmf/common.sh@478 -- # killprocess 540516 00:07:10.257 01:15:01 -- common/autotest_common.sh@926 -- # '[' -z 540516 ']' 00:07:10.257 01:15:01 -- common/autotest_common.sh@930 -- # kill -0 540516 00:07:10.257 01:15:01 -- common/autotest_common.sh@931 -- # uname 00:07:10.257 01:15:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:10.257 01:15:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 540516 00:07:10.257 01:15:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:10.257 01:15:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:10.257 01:15:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 540516' 00:07:10.257 killing process with pid 540516 00:07:10.257 01:15:01 -- common/autotest_common.sh@945 -- # kill 540516 00:07:10.257 [2024-07-27 01:15:01.873646] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:10.257 01:15:01 -- common/autotest_common.sh@950 -- # wait 540516 00:07:10.518 01:15:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:10.518 01:15:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:10.518 01:15:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:10.518 01:15:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:10.518 01:15:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:10.518 01:15:02 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.518 01:15:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.518 01:15:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.057 01:15:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:13.057 00:07:13.057 real 0m6.041s 00:07:13.057 user 0m7.268s 00:07:13.057 sys 0m1.784s 00:07:13.057 01:15:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.057 01:15:04 -- common/autotest_common.sh@10 -- # set +x 00:07:13.057 ************************************ 00:07:13.057 END TEST nvmf_discovery 00:07:13.057 ************************************ 00:07:13.057 01:15:04 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:13.057 01:15:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:13.057 01:15:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.057 01:15:04 -- common/autotest_common.sh@10 -- # set +x 00:07:13.057 ************************************ 00:07:13.057 START TEST nvmf_referrals 00:07:13.057 ************************************ 00:07:13.057 01:15:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:13.057 * Looking for test storage... 00:07:13.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.057 01:15:04 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.057 01:15:04 -- nvmf/common.sh@7 -- # uname -s 00:07:13.057 01:15:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.057 01:15:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.057 01:15:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.057 01:15:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.057 01:15:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.057 01:15:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.057 01:15:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.057 01:15:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.057 01:15:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.057 01:15:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.057 01:15:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.057 01:15:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.057 01:15:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.058 01:15:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.058 01:15:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.058 01:15:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.058 01:15:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.058 01:15:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.058 01:15:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.058 01:15:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.058 01:15:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.058 01:15:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.058 01:15:04 -- paths/export.sh@5 -- # export PATH 00:07:13.058 01:15:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.058 01:15:04 -- nvmf/common.sh@46 -- # : 0 00:07:13.058 01:15:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:13.058 01:15:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:13.058 01:15:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:13.058 01:15:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.058 01:15:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.058 01:15:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:13.058 01:15:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:13.058 01:15:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:13.058 01:15:04 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:13.058 01:15:04 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:13.058 01:15:04 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:13.058 01:15:04 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:13.058 01:15:04 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:13.058 01:15:04 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:13.058 01:15:04 -- target/referrals.sh@37 -- # nvmftestinit 00:07:13.058 01:15:04 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:07:13.058 01:15:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.058 01:15:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:13.058 01:15:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:13.058 01:15:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:13.058 01:15:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.058 01:15:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:13.058 01:15:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.058 01:15:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:13.058 01:15:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:13.058 01:15:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:13.058 01:15:04 -- common/autotest_common.sh@10 -- # set +x 00:07:14.964 01:15:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:14.964 01:15:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:14.964 01:15:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:14.964 01:15:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:14.964 01:15:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:14.964 01:15:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:14.964 01:15:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:14.964 01:15:06 -- nvmf/common.sh@294 -- # net_devs=() 00:07:14.964 01:15:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:14.964 01:15:06 -- nvmf/common.sh@295 -- # e810=() 00:07:14.964 01:15:06 -- nvmf/common.sh@295 -- # local -ga e810 00:07:14.964 01:15:06 -- nvmf/common.sh@296 -- # x722=() 00:07:14.964 01:15:06 -- nvmf/common.sh@296 -- # local -ga x722 00:07:14.964 01:15:06 -- nvmf/common.sh@297 -- # mlx=() 00:07:14.964 01:15:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:14.964 01:15:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.964 01:15:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.964 01:15:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.964 01:15:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.964 01:15:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.964 01:15:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.964 01:15:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.964 01:15:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.964 01:15:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.964 01:15:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.964 01:15:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.964 01:15:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:14.964 01:15:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:14.964 01:15:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:14.964 01:15:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:14.964 01:15:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:14.964 01:15:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:14.964 01:15:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:14.964 01:15:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:14.964 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:14.964 01:15:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:14.964 01:15:06 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:14.964 01:15:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.964 01:15:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.964 01:15:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:14.964 01:15:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:14.964 01:15:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:14.964 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:14.964 01:15:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:14.964 01:15:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:14.964 01:15:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.964 01:15:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.964 01:15:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:14.964 01:15:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:14.964 01:15:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:14.964 01:15:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:14.964 01:15:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:14.964 01:15:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.964 01:15:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:14.964 01:15:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.964 01:15:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:14.965 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:14.965 01:15:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.965 01:15:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:14.965 01:15:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.965 01:15:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:14.965 01:15:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.965 01:15:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:14.965 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:14.965 01:15:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.965 01:15:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:14.965 01:15:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:14.965 01:15:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:14.965 01:15:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:14.965 01:15:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:14.965 01:15:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.965 01:15:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.965 01:15:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:14.965 01:15:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:14.965 01:15:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:14.965 01:15:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:14.965 01:15:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:14.965 01:15:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:14.965 01:15:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.965 01:15:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:14.965 01:15:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:14.965 01:15:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:14.965 01:15:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
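Annotation: at this point nvmf_tcp_init is splitting the two E810 ports between initiator and target: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace for the target while cvl_0_1 stays in the root namespace for the initiator, and the addressing, link-up, iptables and ping checks that complete the setup continue in the trace below. Collected into one place, the sequence amounts to roughly the following; the interface names and addresses are copied from this run, everything else is a sketch rather than the verbatim common.sh code.

    # Consolidated sketch of the nvmf_tcp_init steps traced in this section.
    TARGET_IF=cvl_0_0            INITIATOR_IF=cvl_0_1
    TARGET_IP=10.0.0.2           INITIATOR_IP=10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush $TARGET_IF
    ip -4 addr flush $INITIATOR_IF
    ip netns add $NS
    ip link set $TARGET_IF netns $NS                     # target port lives in the namespace
    ip addr add $INITIATOR_IP/24 dev $INITIATOR_IF
    ip netns exec $NS ip addr add $TARGET_IP/24 dev $TARGET_IF
    ip link set $INITIATOR_IF up
    ip netns exec $NS ip link set $TARGET_IF up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT
    ping -c 1 $TARGET_IP                                 # initiator -> target
    ip netns exec $NS ping -c 1 $INITIATOR_IP            # target -> initiator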
00:07:14.965 01:15:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:14.965 01:15:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:14.965 01:15:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:14.965 01:15:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:14.965 01:15:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:14.965 01:15:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:14.965 01:15:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:14.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:14.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:07:14.965 00:07:14.965 --- 10.0.0.2 ping statistics --- 00:07:14.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.965 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:07:14.965 01:15:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:14.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:14.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:07:14.965 00:07:14.965 --- 10.0.0.1 ping statistics --- 00:07:14.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.965 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:07:14.965 01:15:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.965 01:15:06 -- nvmf/common.sh@410 -- # return 0 00:07:14.965 01:15:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:14.965 01:15:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.965 01:15:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:14.965 01:15:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:14.965 01:15:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.965 01:15:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:14.965 01:15:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:14.965 01:15:06 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:14.965 01:15:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:14.965 01:15:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:14.965 01:15:06 -- common/autotest_common.sh@10 -- # set +x 00:07:14.965 01:15:06 -- nvmf/common.sh@469 -- # nvmfpid=542816 00:07:14.965 01:15:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:14.965 01:15:06 -- nvmf/common.sh@470 -- # waitforlisten 542816 00:07:14.965 01:15:06 -- common/autotest_common.sh@819 -- # '[' -z 542816 ']' 00:07:14.965 01:15:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.965 01:15:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:14.965 01:15:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.965 01:15:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:14.965 01:15:06 -- common/autotest_common.sh@10 -- # set +x 00:07:14.965 [2024-07-27 01:15:06.491663] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:14.965 [2024-07-27 01:15:06.491730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.965 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.965 [2024-07-27 01:15:06.558383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.965 [2024-07-27 01:15:06.678030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:14.965 [2024-07-27 01:15:06.678204] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.965 [2024-07-27 01:15:06.678224] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.965 [2024-07-27 01:15:06.678238] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:14.965 [2024-07-27 01:15:06.678297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.965 [2024-07-27 01:15:06.678376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.965 [2024-07-27 01:15:06.678434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.965 [2024-07-27 01:15:06.678437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.901 01:15:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:15.901 01:15:07 -- common/autotest_common.sh@852 -- # return 0 00:07:15.901 01:15:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:15.901 01:15:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:15.901 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:15.901 01:15:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.901 01:15:07 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:15.901 01:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.901 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:15.901 [2024-07-27 01:15:07.483572] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.901 01:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.901 01:15:07 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:15.901 01:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.901 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:15.901 [2024-07-27 01:15:07.495790] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:15.901 01:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.901 01:15:07 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:15.901 01:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.901 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:15.901 01:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.901 01:15:07 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:15.901 01:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.901 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:15.901 01:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.901 01:15:07 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:07:15.901 01:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.901 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:15.901 01:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.901 01:15:07 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:15.901 01:15:07 -- target/referrals.sh@48 -- # jq length 00:07:15.901 01:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.901 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:15.901 01:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.901 01:15:07 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:15.901 01:15:07 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:15.901 01:15:07 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:15.901 01:15:07 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:15.901 01:15:07 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:15.901 01:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:15.901 01:15:07 -- target/referrals.sh@21 -- # sort 00:07:15.901 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:15.901 01:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:15.901 01:15:07 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:15.901 01:15:07 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:15.901 01:15:07 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:15.901 01:15:07 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:15.901 01:15:07 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:15.901 01:15:07 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:15.901 01:15:07 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:15.901 01:15:07 -- target/referrals.sh@26 -- # sort 00:07:16.160 01:15:07 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:16.160 01:15:07 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:16.160 01:15:07 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:16.160 01:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.160 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.160 01:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.160 01:15:07 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:16.160 01:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.160 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.160 01:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.160 01:15:07 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:16.160 01:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.160 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.160 01:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.160 01:15:07 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:16.160 01:15:07 -- target/referrals.sh@56 -- # jq length 00:07:16.160 01:15:07 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.160 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.160 01:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.160 01:15:07 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:16.160 01:15:07 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:16.160 01:15:07 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:16.160 01:15:07 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:16.160 01:15:07 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:16.160 01:15:07 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:16.160 01:15:07 -- target/referrals.sh@26 -- # sort 00:07:16.160 01:15:07 -- target/referrals.sh@26 -- # echo 00:07:16.160 01:15:07 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:16.160 01:15:07 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:16.160 01:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.160 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.160 01:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.160 01:15:07 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:16.160 01:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.160 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.419 01:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.419 01:15:07 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:16.419 01:15:07 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:16.419 01:15:07 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:16.419 01:15:07 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:16.419 01:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.419 01:15:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.419 01:15:07 -- target/referrals.sh@21 -- # sort 00:07:16.419 01:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.419 01:15:07 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:16.419 01:15:07 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:16.419 01:15:07 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:16.419 01:15:07 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:16.419 01:15:07 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:16.419 01:15:07 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:16.419 01:15:07 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:16.419 01:15:07 -- target/referrals.sh@26 -- # sort 00:07:16.419 01:15:08 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:16.419 01:15:08 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:16.419 01:15:08 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:16.419 01:15:08 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:16.419 01:15:08 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:16.419 01:15:08 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:16.419 01:15:08 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:16.677 01:15:08 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:16.677 01:15:08 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:16.677 01:15:08 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:16.677 01:15:08 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:16.678 01:15:08 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:16.678 01:15:08 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:16.678 01:15:08 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:16.678 01:15:08 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:16.678 01:15:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.678 01:15:08 -- common/autotest_common.sh@10 -- # set +x 00:07:16.678 01:15:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.678 01:15:08 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:16.678 01:15:08 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:16.678 01:15:08 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:16.678 01:15:08 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:16.678 01:15:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.678 01:15:08 -- target/referrals.sh@21 -- # sort 00:07:16.678 01:15:08 -- common/autotest_common.sh@10 -- # set +x 00:07:16.678 01:15:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.678 01:15:08 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:16.678 01:15:08 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:16.678 01:15:08 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:16.678 01:15:08 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:16.678 01:15:08 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:16.936 01:15:08 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:16.936 01:15:08 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:16.936 01:15:08 -- target/referrals.sh@26 -- # sort 00:07:16.936 01:15:08 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:16.936 01:15:08 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:16.936 01:15:08 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:16.936 01:15:08 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:16.936 01:15:08 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:16.936 01:15:08 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:16.936 01:15:08 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:16.936 01:15:08 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:16.936 01:15:08 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:16.936 01:15:08 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:16.936 01:15:08 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:16.936 01:15:08 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:16.936 01:15:08 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:17.195 01:15:08 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:17.195 01:15:08 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:17.195 01:15:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.195 01:15:08 -- common/autotest_common.sh@10 -- # set +x 00:07:17.195 01:15:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.195 01:15:08 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:17.195 01:15:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.195 01:15:08 -- target/referrals.sh@82 -- # jq length 00:07:17.195 01:15:08 -- common/autotest_common.sh@10 -- # set +x 00:07:17.195 01:15:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.195 01:15:08 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:17.195 01:15:08 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:17.195 01:15:08 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:17.195 01:15:08 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:17.195 01:15:08 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:17.195 01:15:08 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:17.195 01:15:08 -- target/referrals.sh@26 -- # sort 00:07:17.195 01:15:08 -- target/referrals.sh@26 -- # echo 00:07:17.195 01:15:08 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:17.195 01:15:08 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:17.195 01:15:08 -- target/referrals.sh@86 -- # nvmftestfini 00:07:17.195 01:15:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:17.195 01:15:08 -- nvmf/common.sh@116 -- # sync 00:07:17.195 01:15:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:17.195 01:15:08 -- nvmf/common.sh@119 -- # set +e 00:07:17.195 01:15:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:17.195 01:15:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:17.195 rmmod nvme_tcp 00:07:17.195 rmmod nvme_fabrics 00:07:17.195 rmmod nvme_keyring 00:07:17.195 01:15:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:17.195 01:15:08 -- nvmf/common.sh@123 -- # set -e 00:07:17.195 01:15:08 -- nvmf/common.sh@124 -- # return 0 00:07:17.195 01:15:08 -- nvmf/common.sh@477 
-- # '[' -n 542816 ']' 00:07:17.195 01:15:08 -- nvmf/common.sh@478 -- # killprocess 542816 00:07:17.195 01:15:08 -- common/autotest_common.sh@926 -- # '[' -z 542816 ']' 00:07:17.195 01:15:08 -- common/autotest_common.sh@930 -- # kill -0 542816 00:07:17.195 01:15:08 -- common/autotest_common.sh@931 -- # uname 00:07:17.195 01:15:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:17.195 01:15:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 542816 00:07:17.195 01:15:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:17.195 01:15:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:17.195 01:15:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 542816' 00:07:17.195 killing process with pid 542816 00:07:17.195 01:15:08 -- common/autotest_common.sh@945 -- # kill 542816 00:07:17.195 01:15:08 -- common/autotest_common.sh@950 -- # wait 542816 00:07:17.453 01:15:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:17.453 01:15:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:17.453 01:15:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:17.453 01:15:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:17.453 01:15:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:17.453 01:15:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.453 01:15:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.453 01:15:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.019 01:15:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:20.019 00:07:20.019 real 0m7.017s 00:07:20.019 user 0m11.439s 00:07:20.019 sys 0m2.158s 00:07:20.019 01:15:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.019 01:15:11 -- common/autotest_common.sh@10 -- # set +x 00:07:20.019 ************************************ 00:07:20.019 END TEST nvmf_referrals 00:07:20.019 ************************************ 00:07:20.019 01:15:11 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:20.019 01:15:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:20.019 01:15:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.019 01:15:11 -- common/autotest_common.sh@10 -- # set +x 00:07:20.019 ************************************ 00:07:20.019 START TEST nvmf_connect_disconnect 00:07:20.019 ************************************ 00:07:20.019 01:15:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:20.019 * Looking for test storage... 
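The nvmf_referrals run that ended above is essentially a short RPC/discovery round-trip against the nvmf_tgt running inside the namespace. Condensed from its trace (addresses, ports and the hostnqn/hostid are the values used in this run):

  # Condensed from the nvmf_referrals trace above (not the script verbatim)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  # add three referrals, then confirm them both over RPC and in the discovery log page
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  rpc_cmd nvmf_discovery_get_referrals | jq length            # expect 3
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json                      # referrals appear as log-page entries
  # remove them again and re-check that the list is empty
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  rpc_cmd nvmf_discovery_get_referrals | jq length            # expect 0

The later passes in the same test repeat this with an explicit subsystem NQN (-n nqn.2016-06.io.spdk:cnode1) and with the discovery NQN, checking that the subnqn reported in the log-page entry matches.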
00:07:20.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.019 01:15:11 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.019 01:15:11 -- nvmf/common.sh@7 -- # uname -s 00:07:20.019 01:15:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.019 01:15:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.019 01:15:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.019 01:15:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.019 01:15:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.019 01:15:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.019 01:15:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.019 01:15:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.019 01:15:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.019 01:15:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.019 01:15:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:20.019 01:15:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:20.019 01:15:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.019 01:15:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.019 01:15:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.019 01:15:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.019 01:15:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.019 01:15:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.019 01:15:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.019 01:15:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.020 01:15:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.020 01:15:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.020 01:15:11 -- paths/export.sh@5 -- # export PATH 00:07:20.020 01:15:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.020 01:15:11 -- nvmf/common.sh@46 -- # : 0 00:07:20.020 01:15:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:20.020 01:15:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:20.020 01:15:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:20.020 01:15:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.020 01:15:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.020 01:15:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:20.020 01:15:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:20.020 01:15:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:20.020 01:15:11 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:20.020 01:15:11 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:20.020 01:15:11 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:20.020 01:15:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:20.020 01:15:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.020 01:15:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:20.020 01:15:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:20.020 01:15:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:20.020 01:15:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.020 01:15:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.020 01:15:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.020 01:15:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:20.020 01:15:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:20.020 01:15:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:20.020 01:15:11 -- common/autotest_common.sh@10 -- # set +x 00:07:21.924 01:15:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:21.924 01:15:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:21.924 01:15:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:21.924 01:15:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:21.924 01:15:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:21.924 01:15:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:21.924 01:15:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:21.924 01:15:13 -- nvmf/common.sh@294 -- # net_devs=() 00:07:21.924 01:15:13 -- nvmf/common.sh@294 -- # local -ga net_devs 
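The gather_supported_nvmf_pci_devs trace that follows is the same NIC scan already seen in the referrals prologue: with NET_TYPE=phy the harness keys off Intel E810 device IDs (0x8086:0x1592 / 0x8086:0x159b, bound to the ice driver here) and resolves each matching PCI function to its kernel netdev through sysfs. Roughly:

  # Rough shape of the scan traced below (simplified from nvmf/common.sh)
  pci_devs=("${e810[@]}")                       # only the E810 entries on this rig
  net_devs=()
  for pci in "${pci_devs[@]}"; do               # 0000:0a:00.0 and 0000:0a:00.1 in this run
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep names such as cvl_0_0
      net_devs+=("${pci_net_devs[@]}")
  done

The two netdevs found this way (cvl_0_0 and cvl_0_1) are what nvmf_tcp_init then splits between the host and the target namespace.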
00:07:21.924 01:15:13 -- nvmf/common.sh@295 -- # e810=() 00:07:21.924 01:15:13 -- nvmf/common.sh@295 -- # local -ga e810 00:07:21.924 01:15:13 -- nvmf/common.sh@296 -- # x722=() 00:07:21.924 01:15:13 -- nvmf/common.sh@296 -- # local -ga x722 00:07:21.924 01:15:13 -- nvmf/common.sh@297 -- # mlx=() 00:07:21.924 01:15:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:21.924 01:15:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.924 01:15:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.924 01:15:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.924 01:15:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.924 01:15:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.924 01:15:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.924 01:15:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.924 01:15:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.924 01:15:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.924 01:15:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.924 01:15:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.924 01:15:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:21.924 01:15:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:21.924 01:15:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:21.924 01:15:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:21.924 01:15:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:21.924 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:21.924 01:15:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:21.924 01:15:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:21.924 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:21.924 01:15:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:21.924 01:15:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:21.924 01:15:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.924 01:15:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:21.924 01:15:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.924 01:15:13 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:07:21.924 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:21.924 01:15:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.924 01:15:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:21.924 01:15:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.924 01:15:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:21.924 01:15:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.924 01:15:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:21.924 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:21.924 01:15:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.924 01:15:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:21.924 01:15:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:21.924 01:15:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:21.924 01:15:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:21.924 01:15:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.924 01:15:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.924 01:15:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.924 01:15:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:21.924 01:15:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.924 01:15:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.924 01:15:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:21.924 01:15:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.924 01:15:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.924 01:15:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:21.924 01:15:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:21.924 01:15:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.924 01:15:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.924 01:15:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.924 01:15:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.924 01:15:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:21.924 01:15:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.924 01:15:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.924 01:15:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.924 01:15:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:21.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:07:21.924 00:07:21.924 --- 10.0.0.2 ping statistics --- 00:07:21.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.924 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:07:21.924 01:15:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:21.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:07:21.925 00:07:21.925 --- 10.0.0.1 ping statistics --- 00:07:21.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.925 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:07:21.925 01:15:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.925 01:15:13 -- nvmf/common.sh@410 -- # return 0 00:07:21.925 01:15:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:21.925 01:15:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.925 01:15:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:21.925 01:15:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:21.925 01:15:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.925 01:15:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:21.925 01:15:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:21.925 01:15:13 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:21.925 01:15:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:21.925 01:15:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:21.925 01:15:13 -- common/autotest_common.sh@10 -- # set +x 00:07:21.925 01:15:13 -- nvmf/common.sh@469 -- # nvmfpid=545671 00:07:21.925 01:15:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:21.925 01:15:13 -- nvmf/common.sh@470 -- # waitforlisten 545671 00:07:21.925 01:15:13 -- common/autotest_common.sh@819 -- # '[' -z 545671 ']' 00:07:21.925 01:15:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.925 01:15:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:21.925 01:15:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.925 01:15:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:21.925 01:15:13 -- common/autotest_common.sh@10 -- # set +x 00:07:21.925 [2024-07-27 01:15:13.543816] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:21.925 [2024-07-27 01:15:13.543890] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.925 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.925 [2024-07-27 01:15:13.614004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.185 [2024-07-27 01:15:13.725604] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.185 [2024-07-27 01:15:13.725773] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.185 [2024-07-27 01:15:13.725790] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.185 [2024-07-27 01:15:13.725802] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:22.185 [2024-07-27 01:15:13.725963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.185 [2024-07-27 01:15:13.726029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.185 [2024-07-27 01:15:13.726095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.185 [2024-07-27 01:15:13.726098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.124 01:15:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:23.124 01:15:14 -- common/autotest_common.sh@852 -- # return 0 00:07:23.124 01:15:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:23.124 01:15:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:23.124 01:15:14 -- common/autotest_common.sh@10 -- # set +x 00:07:23.124 01:15:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.124 01:15:14 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:23.124 01:15:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.124 01:15:14 -- common/autotest_common.sh@10 -- # set +x 00:07:23.124 [2024-07-27 01:15:14.559677] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.124 01:15:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.124 01:15:14 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:23.124 01:15:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.124 01:15:14 -- common/autotest_common.sh@10 -- # set +x 00:07:23.124 01:15:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.124 01:15:14 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:23.124 01:15:14 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:23.124 01:15:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.124 01:15:14 -- common/autotest_common.sh@10 -- # set +x 00:07:23.124 01:15:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.124 01:15:14 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:23.124 01:15:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.124 01:15:14 -- common/autotest_common.sh@10 -- # set +x 00:07:23.124 01:15:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.124 01:15:14 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.124 01:15:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.124 01:15:14 -- common/autotest_common.sh@10 -- # set +x 00:07:23.124 [2024-07-27 01:15:14.611751] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.124 01:15:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.124 01:15:14 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:23.124 01:15:14 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:23.124 01:15:14 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:23.124 01:15:14 -- target/connect_disconnect.sh@34 -- # set +x 00:07:25.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:07:34.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:06.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.477 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:09:28.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.287 01:19:06 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
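The long run of "disconnected 1 controller(s)" messages above is the visible half of the connect/disconnect soak: connect_disconnect.sh sets num_iterations=100 and NVME_CONNECT='nvme connect -i 8', then turns xtrace off for the loop itself. The target side was prepared with a 64 MB malloc bdev (512-byte blocks) exported as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. Reconstructed from those options, each iteration looks roughly like the sketch below; the real loop may add waits and sanity checks that are not visible in this trace.

  # One-time target setup, taken from the rpc_cmd calls traced above
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512                                    # -> Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Assumed shape of the untraced 100-iteration loop (set +x above hides the real body)
  for i in $(seq 1 100); do
      nvme connect -i 8 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
          -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # emits the "disconnected 1 controller(s)" lines
  done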
00:11:15.287 01:19:06 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:15.287 01:19:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:15.287 01:19:06 -- nvmf/common.sh@116 -- # sync 00:11:15.287 01:19:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:15.287 01:19:06 -- nvmf/common.sh@119 -- # set +e 00:11:15.287 01:19:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:15.287 01:19:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:15.287 rmmod nvme_tcp 00:11:15.287 rmmod nvme_fabrics 00:11:15.287 rmmod nvme_keyring 00:11:15.287 01:19:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:15.287 01:19:06 -- nvmf/common.sh@123 -- # set -e 00:11:15.287 01:19:06 -- nvmf/common.sh@124 -- # return 0 00:11:15.287 01:19:06 -- nvmf/common.sh@477 -- # '[' -n 545671 ']' 00:11:15.287 01:19:06 -- nvmf/common.sh@478 -- # killprocess 545671 00:11:15.287 01:19:06 -- common/autotest_common.sh@926 -- # '[' -z 545671 ']' 00:11:15.287 01:19:06 -- common/autotest_common.sh@930 -- # kill -0 545671 00:11:15.287 01:19:06 -- common/autotest_common.sh@931 -- # uname 00:11:15.287 01:19:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:15.287 01:19:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 545671 00:11:15.287 01:19:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:15.287 01:19:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:15.287 01:19:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 545671' 00:11:15.287 killing process with pid 545671 00:11:15.287 01:19:06 -- common/autotest_common.sh@945 -- # kill 545671 00:11:15.287 01:19:06 -- common/autotest_common.sh@950 -- # wait 545671 00:11:15.287 01:19:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:15.287 01:19:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:15.287 01:19:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:15.287 01:19:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:15.287 01:19:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:15.287 01:19:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.287 01:19:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.287 01:19:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.192 01:19:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:17.192 00:11:17.192 real 3m57.618s 00:11:17.192 user 15m4.433s 00:11:17.192 sys 0m35.221s 00:11:17.192 01:19:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.192 01:19:08 -- common/autotest_common.sh@10 -- # set +x 00:11:17.192 ************************************ 00:11:17.192 END TEST nvmf_connect_disconnect 00:11:17.192 ************************************ 00:11:17.192 01:19:08 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:17.192 01:19:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:17.192 01:19:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:17.192 01:19:08 -- common/autotest_common.sh@10 -- # set +x 00:11:17.192 ************************************ 00:11:17.192 START TEST nvmf_multitarget 00:11:17.192 ************************************ 00:11:17.192 01:19:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:17.451 * Looking for test storage... 
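The nvmf_multitarget test starting here goes through the same preamble as the two tests before it; only the middle, test-specific section changes. In outline (paths and the 0xF core mask as used on this node; this is a summary of the traced helpers, not their full bodies):

  # Shared skeleton each target test in this suite repeats
  source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
  nvmftestinit           # prepare_net_devs -> gather_supported_nvmf_pci_devs -> nvmf_tcp_init
  nvmfappstart -m 0xF    # runs nvmf_tgt -i 0 -e 0xFFFF -m 0xF inside cvl_0_0_ns_spdk
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
  # ... test-specific rpc_cmd / nvme-cli work ...
  nvmftestfini           # modprobe -r nvme-tcp/nvme-fabrics, kill nvmf_tgt, remove the namespace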
00:11:17.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.451 01:19:08 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.451 01:19:08 -- nvmf/common.sh@7 -- # uname -s 00:11:17.451 01:19:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.451 01:19:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.451 01:19:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.451 01:19:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.451 01:19:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.451 01:19:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.451 01:19:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.451 01:19:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.451 01:19:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.451 01:19:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.451 01:19:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:17.451 01:19:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:17.451 01:19:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.451 01:19:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.451 01:19:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.451 01:19:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.451 01:19:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.451 01:19:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.451 01:19:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.451 01:19:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.451 01:19:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.451 01:19:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.451 01:19:08 -- paths/export.sh@5 -- # export PATH 00:11:17.451 01:19:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.451 01:19:08 -- nvmf/common.sh@46 -- # : 0 00:11:17.451 01:19:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:17.451 01:19:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:17.451 01:19:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:17.451 01:19:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.451 01:19:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.451 01:19:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:17.451 01:19:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:17.451 01:19:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:17.451 01:19:08 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:17.451 01:19:08 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:17.451 01:19:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:17.451 01:19:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.451 01:19:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:17.451 01:19:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:17.451 01:19:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:17.451 01:19:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.451 01:19:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.451 01:19:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.451 01:19:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:17.451 01:19:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:17.451 01:19:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:17.451 01:19:08 -- common/autotest_common.sh@10 -- # set +x 00:11:19.356 01:19:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:19.356 01:19:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:19.356 01:19:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:19.356 01:19:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:19.356 01:19:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:19.356 01:19:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:19.356 01:19:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:19.356 01:19:10 -- nvmf/common.sh@294 -- # net_devs=() 00:11:19.356 01:19:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:19.356 01:19:10 -- 
nvmf/common.sh@295 -- # e810=() 00:11:19.356 01:19:10 -- nvmf/common.sh@295 -- # local -ga e810 00:11:19.356 01:19:10 -- nvmf/common.sh@296 -- # x722=() 00:11:19.356 01:19:10 -- nvmf/common.sh@296 -- # local -ga x722 00:11:19.356 01:19:10 -- nvmf/common.sh@297 -- # mlx=() 00:11:19.356 01:19:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:19.356 01:19:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.356 01:19:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.356 01:19:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.356 01:19:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.356 01:19:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.356 01:19:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.356 01:19:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.356 01:19:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.356 01:19:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.356 01:19:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.356 01:19:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.356 01:19:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:19.356 01:19:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:19.356 01:19:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:19.356 01:19:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:19.356 01:19:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:19.356 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:19.356 01:19:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:19.356 01:19:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:19.356 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:19.356 01:19:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:19.356 01:19:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:19.356 01:19:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.356 01:19:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:19.356 01:19:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.356 01:19:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:11:19.356 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:19.356 01:19:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.356 01:19:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:19.356 01:19:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.356 01:19:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:19.356 01:19:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.356 01:19:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:19.356 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:19.356 01:19:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.356 01:19:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:19.356 01:19:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:19.356 01:19:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:19.356 01:19:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:19.356 01:19:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.356 01:19:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.356 01:19:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.356 01:19:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:19.356 01:19:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.356 01:19:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.356 01:19:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:19.356 01:19:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.356 01:19:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.356 01:19:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:19.356 01:19:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:19.356 01:19:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.356 01:19:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.356 01:19:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.356 01:19:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.356 01:19:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:19.356 01:19:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.356 01:19:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.356 01:19:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.356 01:19:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:19.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:11:19.356 00:11:19.356 --- 10.0.0.2 ping statistics --- 00:11:19.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.356 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:11:19.356 01:19:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:19.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:11:19.356 00:11:19.356 --- 10.0.0.1 ping statistics --- 00:11:19.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.356 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:11:19.357 01:19:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.357 01:19:10 -- nvmf/common.sh@410 -- # return 0 00:11:19.357 01:19:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:19.357 01:19:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.357 01:19:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:19.357 01:19:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:19.357 01:19:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.357 01:19:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:19.357 01:19:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:19.357 01:19:10 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:19.357 01:19:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:19.357 01:19:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:19.357 01:19:10 -- common/autotest_common.sh@10 -- # set +x 00:11:19.357 01:19:10 -- nvmf/common.sh@469 -- # nvmfpid=577810 00:11:19.357 01:19:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:19.357 01:19:10 -- nvmf/common.sh@470 -- # waitforlisten 577810 00:11:19.357 01:19:10 -- common/autotest_common.sh@819 -- # '[' -z 577810 ']' 00:11:19.357 01:19:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.357 01:19:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:19.357 01:19:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.357 01:19:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:19.357 01:19:10 -- common/autotest_common.sh@10 -- # set +x 00:11:19.357 [2024-07-27 01:19:11.040441] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:19.357 [2024-07-27 01:19:11.040513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.357 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.357 [2024-07-27 01:19:11.109981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.617 [2024-07-27 01:19:11.229825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:19.617 [2024-07-27 01:19:11.229988] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.617 [2024-07-27 01:19:11.230008] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.617 [2024-07-27 01:19:11.230030] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
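The nvmf_tcp_init sequence traced above builds the topology the harness uses for every TCP run: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side (10.0.0.2/24), the second port (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened with iptables, reachability is checked with a ping in each direction, and the nvme-tcp initiator module is loaded before nvmf_tgt is started inside the namespace. A minimal sketch of the same setup follows; it substitutes a veth pair for the physical cvl_0_* ports, so the interface names are an assumption rather than what the harness actually drives:

# Sketch only: rebuild the trace's loopback topology with a veth pair
# instead of the two physical E810 ports (cvl_0_0 / cvl_0_1).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns "$NS"                                  # target side lives in the namespace
ip addr add 10.0.0.1/24 dev veth_init                             # initiator address (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt          # target address
ip link set veth_init up
ip netns exec "$NS" ip link set veth_tgt up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT    # mirrors the rule in the trace
ping -c 1 10.0.0.2                                                # root namespace -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                            # namespace -> root namespace
modprobe nvme-tcp                                                 # kernel initiator driver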
00:11:19.617 [2024-07-27 01:19:11.230119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.617 [2024-07-27 01:19:11.230181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.617 [2024-07-27 01:19:11.230209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.617 [2024-07-27 01:19:11.230212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.554 01:19:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:20.554 01:19:11 -- common/autotest_common.sh@852 -- # return 0 00:11:20.554 01:19:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:20.554 01:19:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:20.554 01:19:11 -- common/autotest_common.sh@10 -- # set +x 00:11:20.554 01:19:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.554 01:19:12 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:20.554 01:19:12 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:20.554 01:19:12 -- target/multitarget.sh@21 -- # jq length 00:11:20.554 01:19:12 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:20.554 01:19:12 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:20.554 "nvmf_tgt_1" 00:11:20.554 01:19:12 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:20.812 "nvmf_tgt_2" 00:11:20.812 01:19:12 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:20.812 01:19:12 -- target/multitarget.sh@28 -- # jq length 00:11:20.812 01:19:12 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:20.812 01:19:12 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:21.072 true 00:11:21.072 01:19:12 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:21.072 true 00:11:21.072 01:19:12 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:21.072 01:19:12 -- target/multitarget.sh@35 -- # jq length 00:11:21.332 01:19:12 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:21.332 01:19:12 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:21.332 01:19:12 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:21.332 01:19:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:21.332 01:19:12 -- nvmf/common.sh@116 -- # sync 00:11:21.332 01:19:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:21.332 01:19:12 -- nvmf/common.sh@119 -- # set +e 00:11:21.332 01:19:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:21.332 01:19:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:21.332 rmmod nvme_tcp 00:11:21.332 rmmod nvme_fabrics 00:11:21.332 rmmod nvme_keyring 00:11:21.332 01:19:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:21.332 01:19:12 -- nvmf/common.sh@123 -- # set -e 00:11:21.332 01:19:12 -- nvmf/common.sh@124 -- # return 0 
00:11:21.332 01:19:12 -- nvmf/common.sh@477 -- # '[' -n 577810 ']' 00:11:21.332 01:19:12 -- nvmf/common.sh@478 -- # killprocess 577810 00:11:21.332 01:19:12 -- common/autotest_common.sh@926 -- # '[' -z 577810 ']' 00:11:21.332 01:19:12 -- common/autotest_common.sh@930 -- # kill -0 577810 00:11:21.332 01:19:12 -- common/autotest_common.sh@931 -- # uname 00:11:21.332 01:19:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:21.332 01:19:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 577810 00:11:21.332 01:19:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:21.332 01:19:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:21.332 01:19:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 577810' 00:11:21.332 killing process with pid 577810 00:11:21.332 01:19:12 -- common/autotest_common.sh@945 -- # kill 577810 00:11:21.332 01:19:12 -- common/autotest_common.sh@950 -- # wait 577810 00:11:21.591 01:19:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:21.591 01:19:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:21.591 01:19:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:21.591 01:19:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:21.591 01:19:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:21.591 01:19:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.591 01:19:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.591 01:19:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.498 01:19:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:23.498 00:11:23.498 real 0m6.324s 00:11:23.498 user 0m9.313s 00:11:23.498 sys 0m1.881s 00:11:23.498 01:19:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.498 01:19:15 -- common/autotest_common.sh@10 -- # set +x 00:11:23.498 ************************************ 00:11:23.498 END TEST nvmf_multitarget 00:11:23.498 ************************************ 00:11:23.498 01:19:15 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:23.498 01:19:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:23.498 01:19:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:23.498 01:19:15 -- common/autotest_common.sh@10 -- # set +x 00:11:23.757 ************************************ 00:11:23.757 START TEST nvmf_rpc 00:11:23.757 ************************************ 00:11:23.757 01:19:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:23.757 * Looking for test storage... 
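Before the rpc.sh trace gets going, the multitarget test that just ended is worth condensing: it drives everything through the multitarget_rpc.py helper, checking that only the default target exists, adding two named targets, confirming the target count went from 1 to 3, then deleting them and confirming the count is back to 1. The same flow, reduced to the helper calls and jq checks that appear in the trace (the -s 32 argument is carried over verbatim from the test):

# Condensed from the nvmf_multitarget run above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$("$rpc" nvmf_get_targets | jq length)" -eq 1 ]     # only the default target exists
"$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32
"$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$("$rpc" nvmf_get_targets | jq length)" -eq 3 ]     # default target plus the two named ones
"$rpc" nvmf_delete_target -n nvmf_tgt_1
"$rpc" nvmf_delete_target -n nvmf_tgt_2
[ "$("$rpc" nvmf_get_targets | jq length)" -eq 1 ]     # back to just the default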
00:11:23.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.757 01:19:15 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.757 01:19:15 -- nvmf/common.sh@7 -- # uname -s 00:11:23.757 01:19:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.757 01:19:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.757 01:19:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.757 01:19:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.757 01:19:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.757 01:19:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.757 01:19:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.757 01:19:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.757 01:19:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.757 01:19:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.757 01:19:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.757 01:19:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.757 01:19:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.757 01:19:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.757 01:19:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.757 01:19:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.757 01:19:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.757 01:19:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.757 01:19:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.757 01:19:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.757 01:19:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.757 01:19:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.757 01:19:15 -- paths/export.sh@5 -- # export PATH 00:11:23.757 01:19:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.757 01:19:15 -- nvmf/common.sh@46 -- # : 0 00:11:23.757 01:19:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:23.757 01:19:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:23.757 01:19:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:23.757 01:19:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.757 01:19:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.757 01:19:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:23.757 01:19:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:23.757 01:19:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:23.757 01:19:15 -- target/rpc.sh@11 -- # loops=5 00:11:23.757 01:19:15 -- target/rpc.sh@23 -- # nvmftestinit 00:11:23.757 01:19:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:23.757 01:19:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.757 01:19:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:23.757 01:19:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:23.757 01:19:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:23.757 01:19:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.757 01:19:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.757 01:19:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.757 01:19:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:23.757 01:19:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:23.757 01:19:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:23.757 01:19:15 -- common/autotest_common.sh@10 -- # set +x 00:11:25.692 01:19:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:25.692 01:19:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:25.692 01:19:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:25.692 01:19:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:25.692 01:19:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:25.692 01:19:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:25.692 01:19:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:25.692 01:19:17 -- nvmf/common.sh@294 -- # net_devs=() 00:11:25.692 01:19:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:25.692 01:19:17 -- nvmf/common.sh@295 -- # e810=() 00:11:25.692 01:19:17 -- nvmf/common.sh@295 -- # local -ga e810 00:11:25.692 
01:19:17 -- nvmf/common.sh@296 -- # x722=() 00:11:25.692 01:19:17 -- nvmf/common.sh@296 -- # local -ga x722 00:11:25.692 01:19:17 -- nvmf/common.sh@297 -- # mlx=() 00:11:25.692 01:19:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:25.692 01:19:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.692 01:19:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.692 01:19:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.692 01:19:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.692 01:19:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.692 01:19:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.692 01:19:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.692 01:19:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.692 01:19:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.692 01:19:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.692 01:19:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.692 01:19:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:25.692 01:19:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:25.692 01:19:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:25.692 01:19:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:25.692 01:19:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:25.692 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:25.692 01:19:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:25.692 01:19:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:25.692 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:25.692 01:19:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:25.692 01:19:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:25.692 01:19:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.692 01:19:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:25.692 01:19:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.692 01:19:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:25.692 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:25.692 01:19:17 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:25.692 01:19:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:25.692 01:19:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.692 01:19:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:25.692 01:19:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.692 01:19:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:25.692 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:25.692 01:19:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.692 01:19:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:25.692 01:19:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:25.692 01:19:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:25.692 01:19:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:25.692 01:19:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.692 01:19:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.692 01:19:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.692 01:19:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:25.692 01:19:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.692 01:19:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.692 01:19:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:25.692 01:19:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.692 01:19:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.692 01:19:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:25.692 01:19:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:25.692 01:19:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.692 01:19:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.692 01:19:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.692 01:19:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.692 01:19:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:25.692 01:19:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.692 01:19:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.692 01:19:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.692 01:19:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:25.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:11:25.692 00:11:25.692 --- 10.0.0.2 ping statistics --- 00:11:25.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.692 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:11:25.692 01:19:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:25.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:11:25.692 00:11:25.692 --- 10.0.0.1 ping statistics --- 00:11:25.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.692 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:11:25.692 01:19:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.692 01:19:17 -- nvmf/common.sh@410 -- # return 0 00:11:25.692 01:19:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:25.693 01:19:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.693 01:19:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:25.693 01:19:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:25.693 01:19:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.693 01:19:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:25.693 01:19:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:25.693 01:19:17 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:25.693 01:19:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:25.693 01:19:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:25.693 01:19:17 -- common/autotest_common.sh@10 -- # set +x 00:11:25.693 01:19:17 -- nvmf/common.sh@469 -- # nvmfpid=579987 00:11:25.693 01:19:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.693 01:19:17 -- nvmf/common.sh@470 -- # waitforlisten 579987 00:11:25.693 01:19:17 -- common/autotest_common.sh@819 -- # '[' -z 579987 ']' 00:11:25.693 01:19:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.693 01:19:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:25.693 01:19:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.693 01:19:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:25.693 01:19:17 -- common/autotest_common.sh@10 -- # set +x 00:11:25.953 [2024-07-27 01:19:17.472584] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:25.953 [2024-07-27 01:19:17.472662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.953 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.953 [2024-07-27 01:19:17.539019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.953 [2024-07-27 01:19:17.645419] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:25.953 [2024-07-27 01:19:17.645570] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.953 [2024-07-27 01:19:17.645588] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.953 [2024-07-27 01:19:17.645601] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
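As in the multitarget run, nvmfappstart launches nvmf_tgt inside the target namespace (-i 0 -e 0xFFFF -m 0xF) and blocks in waitforlisten until the application answers on its RPC socket; only then does rpc.sh start issuing commands such as nvmf_get_stats and nvmf_create_transport -t tcp -o -u 8192. A rough equivalent of that launch-and-wait step, assuming the default /var/tmp/spdk.sock socket path and polling with the stock scripts/rpc.py rather than the harness's helpers:

# Sketch: start the target in the namespace and wait for its RPC socket to answer.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1      # give up if the target died during startup
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"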
00:11:25.953 [2024-07-27 01:19:17.645673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.953 [2024-07-27 01:19:17.645726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.953 [2024-07-27 01:19:17.645793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.953 [2024-07-27 01:19:17.645795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.890 01:19:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:26.890 01:19:18 -- common/autotest_common.sh@852 -- # return 0 00:11:26.890 01:19:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:26.890 01:19:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:26.890 01:19:18 -- common/autotest_common.sh@10 -- # set +x 00:11:26.890 01:19:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.890 01:19:18 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:26.890 01:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:26.890 01:19:18 -- common/autotest_common.sh@10 -- # set +x 00:11:26.890 01:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:26.890 01:19:18 -- target/rpc.sh@26 -- # stats='{ 00:11:26.890 "tick_rate": 2700000000, 00:11:26.890 "poll_groups": [ 00:11:26.890 { 00:11:26.890 "name": "nvmf_tgt_poll_group_0", 00:11:26.890 "admin_qpairs": 0, 00:11:26.890 "io_qpairs": 0, 00:11:26.890 "current_admin_qpairs": 0, 00:11:26.890 "current_io_qpairs": 0, 00:11:26.890 "pending_bdev_io": 0, 00:11:26.890 "completed_nvme_io": 0, 00:11:26.890 "transports": [] 00:11:26.890 }, 00:11:26.890 { 00:11:26.890 "name": "nvmf_tgt_poll_group_1", 00:11:26.891 "admin_qpairs": 0, 00:11:26.891 "io_qpairs": 0, 00:11:26.891 "current_admin_qpairs": 0, 00:11:26.891 "current_io_qpairs": 0, 00:11:26.891 "pending_bdev_io": 0, 00:11:26.891 "completed_nvme_io": 0, 00:11:26.891 "transports": [] 00:11:26.891 }, 00:11:26.891 { 00:11:26.891 "name": "nvmf_tgt_poll_group_2", 00:11:26.891 "admin_qpairs": 0, 00:11:26.891 "io_qpairs": 0, 00:11:26.891 "current_admin_qpairs": 0, 00:11:26.891 "current_io_qpairs": 0, 00:11:26.891 "pending_bdev_io": 0, 00:11:26.891 "completed_nvme_io": 0, 00:11:26.891 "transports": [] 00:11:26.891 }, 00:11:26.891 { 00:11:26.891 "name": "nvmf_tgt_poll_group_3", 00:11:26.891 "admin_qpairs": 0, 00:11:26.891 "io_qpairs": 0, 00:11:26.891 "current_admin_qpairs": 0, 00:11:26.891 "current_io_qpairs": 0, 00:11:26.891 "pending_bdev_io": 0, 00:11:26.891 "completed_nvme_io": 0, 00:11:26.891 "transports": [] 00:11:26.891 } 00:11:26.891 ] 00:11:26.891 }' 00:11:26.891 01:19:18 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:26.891 01:19:18 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:26.891 01:19:18 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:26.891 01:19:18 -- target/rpc.sh@15 -- # wc -l 00:11:26.891 01:19:18 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:26.891 01:19:18 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:26.891 01:19:18 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:26.891 01:19:18 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.891 01:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:26.891 01:19:18 -- common/autotest_common.sh@10 -- # set +x 00:11:26.891 [2024-07-27 01:19:18.583028] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.891 01:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:26.891 01:19:18 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:26.891 01:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:26.891 01:19:18 -- common/autotest_common.sh@10 -- # set +x 00:11:26.891 01:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:26.891 01:19:18 -- target/rpc.sh@33 -- # stats='{ 00:11:26.891 "tick_rate": 2700000000, 00:11:26.891 "poll_groups": [ 00:11:26.891 { 00:11:26.891 "name": "nvmf_tgt_poll_group_0", 00:11:26.891 "admin_qpairs": 0, 00:11:26.891 "io_qpairs": 0, 00:11:26.891 "current_admin_qpairs": 0, 00:11:26.891 "current_io_qpairs": 0, 00:11:26.891 "pending_bdev_io": 0, 00:11:26.891 "completed_nvme_io": 0, 00:11:26.891 "transports": [ 00:11:26.891 { 00:11:26.891 "trtype": "TCP" 00:11:26.891 } 00:11:26.891 ] 00:11:26.891 }, 00:11:26.891 { 00:11:26.891 "name": "nvmf_tgt_poll_group_1", 00:11:26.891 "admin_qpairs": 0, 00:11:26.891 "io_qpairs": 0, 00:11:26.891 "current_admin_qpairs": 0, 00:11:26.891 "current_io_qpairs": 0, 00:11:26.891 "pending_bdev_io": 0, 00:11:26.891 "completed_nvme_io": 0, 00:11:26.891 "transports": [ 00:11:26.891 { 00:11:26.891 "trtype": "TCP" 00:11:26.891 } 00:11:26.891 ] 00:11:26.891 }, 00:11:26.891 { 00:11:26.891 "name": "nvmf_tgt_poll_group_2", 00:11:26.891 "admin_qpairs": 0, 00:11:26.891 "io_qpairs": 0, 00:11:26.891 "current_admin_qpairs": 0, 00:11:26.891 "current_io_qpairs": 0, 00:11:26.891 "pending_bdev_io": 0, 00:11:26.891 "completed_nvme_io": 0, 00:11:26.891 "transports": [ 00:11:26.891 { 00:11:26.891 "trtype": "TCP" 00:11:26.891 } 00:11:26.891 ] 00:11:26.891 }, 00:11:26.891 { 00:11:26.891 "name": "nvmf_tgt_poll_group_3", 00:11:26.891 "admin_qpairs": 0, 00:11:26.891 "io_qpairs": 0, 00:11:26.891 "current_admin_qpairs": 0, 00:11:26.891 "current_io_qpairs": 0, 00:11:26.891 "pending_bdev_io": 0, 00:11:26.891 "completed_nvme_io": 0, 00:11:26.891 "transports": [ 00:11:26.891 { 00:11:26.891 "trtype": "TCP" 00:11:26.891 } 00:11:26.891 ] 00:11:26.891 } 00:11:26.891 ] 00:11:26.891 }' 00:11:26.891 01:19:18 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:26.891 01:19:18 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:26.891 01:19:18 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:26.891 01:19:18 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:26.891 01:19:18 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:26.891 01:19:18 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:26.891 01:19:18 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:26.891 01:19:18 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:26.891 01:19:18 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:27.149 01:19:18 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:27.149 01:19:18 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:27.149 01:19:18 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:27.149 01:19:18 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:27.149 01:19:18 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:27.149 01:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.149 01:19:18 -- common/autotest_common.sh@10 -- # set +x 00:11:27.149 Malloc1 00:11:27.149 01:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.149 01:19:18 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:27.149 01:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.149 01:19:18 -- common/autotest_common.sh@10 -- # set +x 00:11:27.149 
01:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.149 01:19:18 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:27.149 01:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.149 01:19:18 -- common/autotest_common.sh@10 -- # set +x 00:11:27.149 01:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.149 01:19:18 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:27.149 01:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.149 01:19:18 -- common/autotest_common.sh@10 -- # set +x 00:11:27.149 01:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.149 01:19:18 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.149 01:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.149 01:19:18 -- common/autotest_common.sh@10 -- # set +x 00:11:27.149 [2024-07-27 01:19:18.738806] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.149 01:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.149 01:19:18 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:27.149 01:19:18 -- common/autotest_common.sh@640 -- # local es=0 00:11:27.149 01:19:18 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:27.149 01:19:18 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:27.149 01:19:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:27.149 01:19:18 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:27.149 01:19:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:27.149 01:19:18 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:27.149 01:19:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:27.149 01:19:18 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:27.149 01:19:18 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:27.149 01:19:18 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:27.149 [2024-07-27 01:19:18.761327] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:27.149 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:27.149 could not add new controller: failed to write to nvme-fabrics device 00:11:27.149 01:19:18 -- common/autotest_common.sh@643 -- # es=1 00:11:27.149 01:19:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:27.149 01:19:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:27.149 01:19:18 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:11:27.149 01:19:18 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:27.149 01:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.149 01:19:18 -- common/autotest_common.sh@10 -- # set +x 00:11:27.149 01:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.150 01:19:18 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.718 01:19:19 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.718 01:19:19 -- common/autotest_common.sh@1177 -- # local i=0 00:11:27.718 01:19:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.718 01:19:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:27.718 01:19:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:30.250 01:19:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:30.250 01:19:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:30.250 01:19:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.250 01:19:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:30.250 01:19:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.250 01:19:21 -- common/autotest_common.sh@1187 -- # return 0 00:11:30.250 01:19:21 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.250 01:19:21 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.250 01:19:21 -- common/autotest_common.sh@1198 -- # local i=0 00:11:30.250 01:19:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:30.250 01:19:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.250 01:19:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:30.250 01:19:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.250 01:19:21 -- common/autotest_common.sh@1210 -- # return 0 00:11:30.250 01:19:21 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.250 01:19:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.250 01:19:21 -- common/autotest_common.sh@10 -- # set +x 00:11:30.250 01:19:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.250 01:19:21 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.250 01:19:21 -- common/autotest_common.sh@640 -- # local es=0 00:11:30.250 01:19:21 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.250 01:19:21 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:30.250 01:19:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:30.250 01:19:21 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:30.250 01:19:21 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:30.250 01:19:21 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:30.250 01:19:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:30.250 01:19:21 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:30.250 01:19:21 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:30.250 01:19:21 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.250 [2024-07-27 01:19:21.571540] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:30.250 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:30.250 could not add new controller: failed to write to nvme-fabrics device 00:11:30.250 01:19:21 -- common/autotest_common.sh@643 -- # es=1 00:11:30.250 01:19:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:30.250 01:19:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:30.250 01:19:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:30.250 01:19:21 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:30.250 01:19:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.250 01:19:21 -- common/autotest_common.sh@10 -- # set +x 00:11:30.250 01:19:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.250 01:19:21 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.509 01:19:22 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:30.509 01:19:22 -- common/autotest_common.sh@1177 -- # local i=0 00:11:30.509 01:19:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.509 01:19:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:30.509 01:19:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:33.035 01:19:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:33.035 01:19:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:33.035 01:19:24 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.035 01:19:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:33.035 01:19:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.035 01:19:24 -- common/autotest_common.sh@1187 -- # return 0 00:11:33.035 01:19:24 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.035 01:19:24 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.035 01:19:24 -- common/autotest_common.sh@1198 -- # local i=0 00:11:33.035 01:19:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:33.035 01:19:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.035 01:19:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:33.035 01:19:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.035 01:19:24 -- common/autotest_common.sh@1210 -- # return 0 00:11:33.035 01:19:24 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.035 01:19:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.035 01:19:24 -- common/autotest_common.sh@10 -- # set +x 00:11:33.035 01:19:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.035 01:19:24 -- target/rpc.sh@81 -- # seq 1 5 00:11:33.035 01:19:24 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:33.035 01:19:24 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:33.035 01:19:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.035 01:19:24 -- common/autotest_common.sh@10 -- # set +x 00:11:33.035 01:19:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.035 01:19:24 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.035 01:19:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.035 01:19:24 -- common/autotest_common.sh@10 -- # set +x 00:11:33.035 [2024-07-27 01:19:24.302803] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.035 01:19:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.035 01:19:24 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:33.035 01:19:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.035 01:19:24 -- common/autotest_common.sh@10 -- # set +x 00:11:33.035 01:19:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.035 01:19:24 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:33.035 01:19:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.035 01:19:24 -- common/autotest_common.sh@10 -- # set +x 00:11:33.035 01:19:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.035 01:19:24 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.293 01:19:24 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:33.293 01:19:24 -- common/autotest_common.sh@1177 -- # local i=0 00:11:33.293 01:19:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.293 01:19:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:33.293 01:19:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:35.190 01:19:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:35.190 01:19:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:35.190 01:19:26 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.190 01:19:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:35.190 01:19:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.190 01:19:26 -- common/autotest_common.sh@1187 -- # return 0 00:11:35.190 01:19:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.448 01:19:27 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:35.448 01:19:27 -- common/autotest_common.sh@1198 -- # local i=0 00:11:35.448 01:19:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:35.448 01:19:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
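The stretch of trace from the Malloc1 bdev creation through the first nvmf_delete_subsystem is rpc.sh's host access-control check: with allow_any_host disabled, the initiator's connect is expected to fail, registering its NQN with nvmf_subsystem_add_host makes the connect succeed, removing the host makes it fail again, and nvmf_subsystem_allow_any_host -e reopens the subsystem to any initiator. The remainder of the trace is a five-pass loop that repeatedly creates, connects to, and tears down the same subsystem. A condensed sketch of the access-control sequence, assuming rpc_cmd resolves to the stock scripts/rpc.py and reusing the host NQN and host ID shown in the trace:

# Condensed from the access-control portion of rpc.sh above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
rpc() { "$SPDK/scripts/rpc.py" "$@"; }    # stand-in for the harness's rpc_cmd (assumption)

rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1      # close the subsystem again
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

! nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420              # expected to fail: host not allowed
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420              # now succeeds
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
rpc nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1      # reopen to any initiator
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1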
00:11:35.448 01:19:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:35.448 01:19:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.448 01:19:27 -- common/autotest_common.sh@1210 -- # return 0 00:11:35.448 01:19:27 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:35.448 01:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.448 01:19:27 -- common/autotest_common.sh@10 -- # set +x 00:11:35.448 01:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.448 01:19:27 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.448 01:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.448 01:19:27 -- common/autotest_common.sh@10 -- # set +x 00:11:35.448 01:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.448 01:19:27 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:35.448 01:19:27 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.448 01:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.448 01:19:27 -- common/autotest_common.sh@10 -- # set +x 00:11:35.448 01:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.448 01:19:27 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.448 01:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.448 01:19:27 -- common/autotest_common.sh@10 -- # set +x 00:11:35.448 [2024-07-27 01:19:27.068944] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.448 01:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.448 01:19:27 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:35.448 01:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.448 01:19:27 -- common/autotest_common.sh@10 -- # set +x 00:11:35.448 01:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.448 01:19:27 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.448 01:19:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.448 01:19:27 -- common/autotest_common.sh@10 -- # set +x 00:11:35.448 01:19:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.448 01:19:27 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.014 01:19:27 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:36.014 01:19:27 -- common/autotest_common.sh@1177 -- # local i=0 00:11:36.014 01:19:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.014 01:19:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:36.014 01:19:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:38.540 01:19:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:38.540 01:19:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:38.540 01:19:29 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.540 01:19:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:38.540 01:19:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.540 01:19:29 -- 
common/autotest_common.sh@1187 -- # return 0 00:11:38.540 01:19:29 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.540 01:19:29 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:38.540 01:19:29 -- common/autotest_common.sh@1198 -- # local i=0 00:11:38.540 01:19:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:38.540 01:19:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.540 01:19:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:38.540 01:19:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.540 01:19:29 -- common/autotest_common.sh@1210 -- # return 0 00:11:38.540 01:19:29 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:38.540 01:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:38.540 01:19:29 -- common/autotest_common.sh@10 -- # set +x 00:11:38.540 01:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:38.540 01:19:29 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.540 01:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:38.540 01:19:29 -- common/autotest_common.sh@10 -- # set +x 00:11:38.540 01:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:38.540 01:19:29 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:38.540 01:19:29 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:38.540 01:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:38.540 01:19:29 -- common/autotest_common.sh@10 -- # set +x 00:11:38.540 01:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:38.540 01:19:29 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.540 01:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:38.540 01:19:29 -- common/autotest_common.sh@10 -- # set +x 00:11:38.540 [2024-07-27 01:19:29.876208] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.540 01:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:38.540 01:19:29 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:38.540 01:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:38.540 01:19:29 -- common/autotest_common.sh@10 -- # set +x 00:11:38.540 01:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:38.540 01:19:29 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:38.540 01:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:38.540 01:19:29 -- common/autotest_common.sh@10 -- # set +x 00:11:38.540 01:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:38.540 01:19:29 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:38.798 01:19:30 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:38.798 01:19:30 -- common/autotest_common.sh@1177 -- # local i=0 00:11:38.798 01:19:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:38.798 01:19:30 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:11:38.798 01:19:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:41.324 01:19:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:41.324 01:19:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:41.324 01:19:32 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:41.324 01:19:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:41.324 01:19:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:41.324 01:19:32 -- common/autotest_common.sh@1187 -- # return 0 00:11:41.324 01:19:32 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.324 01:19:32 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.324 01:19:32 -- common/autotest_common.sh@1198 -- # local i=0 00:11:41.324 01:19:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:41.324 01:19:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.324 01:19:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:41.324 01:19:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.324 01:19:32 -- common/autotest_common.sh@1210 -- # return 0 00:11:41.324 01:19:32 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:41.324 01:19:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:41.324 01:19:32 -- common/autotest_common.sh@10 -- # set +x 00:11:41.324 01:19:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:41.324 01:19:32 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.324 01:19:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:41.324 01:19:32 -- common/autotest_common.sh@10 -- # set +x 00:11:41.324 01:19:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:41.324 01:19:32 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:41.324 01:19:32 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:41.324 01:19:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:41.324 01:19:32 -- common/autotest_common.sh@10 -- # set +x 00:11:41.324 01:19:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:41.324 01:19:32 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.324 01:19:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:41.324 01:19:32 -- common/autotest_common.sh@10 -- # set +x 00:11:41.324 [2024-07-27 01:19:32.642468] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.324 01:19:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:41.324 01:19:32 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:41.324 01:19:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:41.324 01:19:32 -- common/autotest_common.sh@10 -- # set +x 00:11:41.324 01:19:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:41.324 01:19:32 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:41.324 01:19:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:41.324 01:19:32 -- common/autotest_common.sh@10 -- # set +x 00:11:41.324 01:19:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:41.324 
01:19:32 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.582 01:19:33 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:41.582 01:19:33 -- common/autotest_common.sh@1177 -- # local i=0 00:11:41.582 01:19:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.582 01:19:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:41.582 01:19:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:44.108 01:19:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:44.108 01:19:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:44.108 01:19:35 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.108 01:19:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:44.108 01:19:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.108 01:19:35 -- common/autotest_common.sh@1187 -- # return 0 00:11:44.108 01:19:35 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:44.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.108 01:19:35 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:44.108 01:19:35 -- common/autotest_common.sh@1198 -- # local i=0 00:11:44.108 01:19:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:44.108 01:19:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.108 01:19:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:44.108 01:19:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.108 01:19:35 -- common/autotest_common.sh@1210 -- # return 0 00:11:44.108 01:19:35 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:44.108 01:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.108 01:19:35 -- common/autotest_common.sh@10 -- # set +x 00:11:44.108 01:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.108 01:19:35 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.108 01:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.108 01:19:35 -- common/autotest_common.sh@10 -- # set +x 00:11:44.108 01:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.108 01:19:35 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:44.108 01:19:35 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:44.108 01:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.108 01:19:35 -- common/autotest_common.sh@10 -- # set +x 00:11:44.108 01:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.108 01:19:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.108 01:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.108 01:19:35 -- common/autotest_common.sh@10 -- # set +x 00:11:44.108 [2024-07-27 01:19:35.413528] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.108 01:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.108 01:19:35 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:44.108 
01:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.108 01:19:35 -- common/autotest_common.sh@10 -- # set +x 00:11:44.108 01:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.108 01:19:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:44.108 01:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.108 01:19:35 -- common/autotest_common.sh@10 -- # set +x 00:11:44.108 01:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.108 01:19:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.366 01:19:36 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.366 01:19:36 -- common/autotest_common.sh@1177 -- # local i=0 00:11:44.366 01:19:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.366 01:19:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:44.366 01:19:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:46.289 01:19:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:46.289 01:19:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:46.289 01:19:38 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.289 01:19:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:46.289 01:19:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.289 01:19:38 -- common/autotest_common.sh@1187 -- # return 0 00:11:46.289 01:19:38 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.547 01:19:38 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.547 01:19:38 -- common/autotest_common.sh@1198 -- # local i=0 00:11:46.547 01:19:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:46.547 01:19:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.547 01:19:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:46.547 01:19:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.547 01:19:38 -- common/autotest_common.sh@1210 -- # return 0 00:11:46.547 01:19:38 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.547 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.547 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.547 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.547 01:19:38 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.547 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.547 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.547 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.547 01:19:38 -- target/rpc.sh@99 -- # seq 1 5 00:11:46.547 01:19:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:46.547 01:19:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.547 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.547 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.547 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.547 01:19:38 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.547 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.547 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.547 [2024-07-27 01:19:38.136842] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.547 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.547 01:19:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.547 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.547 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.547 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.547 01:19:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.547 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.547 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.547 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.547 01:19:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.547 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.547 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.547 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.547 01:19:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.547 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.547 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.547 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.547 01:19:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:46.548 01:19:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 [2024-07-27 01:19:38.184888] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:46.548 01:19:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 [2024-07-27 01:19:38.233075] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:46.548 01:19:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 [2024-07-27 01:19:38.281255] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 
01:19:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.548 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.548 01:19:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.548 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.548 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.806 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.806 01:19:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.806 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.806 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.806 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.806 01:19:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:46.806 01:19:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.806 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.806 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.806 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.806 01:19:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.806 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.806 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.806 [2024-07-27 01:19:38.329431] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.806 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.806 01:19:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.806 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.806 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.806 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.806 01:19:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.806 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.806 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.806 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.806 01:19:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.806 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.806 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.806 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.806 01:19:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.806 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.806 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.806 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.806 01:19:38 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
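The entries around this point come from the test's second loop, which exercises the same RPCs without any host connection: each pass creates the subsystem, adds the TCP listener, attaches Malloc1 (letting the target assign namespace ID 1), then immediately removes the namespace and deletes the subsystem, after which the per-poll-group statistics are dumped and summed. A minimal sketch of that churn loop and of the jsum-style aggregation, again assuming rpc.py is on PATH and a Malloc1 bdev exists:

#!/usr/bin/env bash
# Hypothetical condensed form of the create/add/remove/delete churn loop.
set -e

NQN=nqn.2016-06.io.spdk:cnode1
loops=5

for _ in $(seq 1 "$loops"); do
    rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1        # no -n: first free NSID (1)
    rpc.py nvmf_subsystem_allow_any_host "$NQN"
    rpc.py nvmf_subsystem_remove_ns "$NQN" 1
    rpc.py nvmf_delete_subsystem "$NQN"
done

# Sanity check: sum a per-poll-group counter across all groups, the same
# jq-then-awk approach the jsum helper uses on the nvmf_get_stats output.
rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'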
00:11:46.806 01:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.806 01:19:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.806 01:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.806 01:19:38 -- target/rpc.sh@110 -- # stats='{ 00:11:46.806 "tick_rate": 2700000000, 00:11:46.806 "poll_groups": [ 00:11:46.806 { 00:11:46.806 "name": "nvmf_tgt_poll_group_0", 00:11:46.806 "admin_qpairs": 2, 00:11:46.806 "io_qpairs": 84, 00:11:46.806 "current_admin_qpairs": 0, 00:11:46.806 "current_io_qpairs": 0, 00:11:46.806 "pending_bdev_io": 0, 00:11:46.806 "completed_nvme_io": 208, 00:11:46.806 "transports": [ 00:11:46.806 { 00:11:46.806 "trtype": "TCP" 00:11:46.806 } 00:11:46.806 ] 00:11:46.806 }, 00:11:46.806 { 00:11:46.806 "name": "nvmf_tgt_poll_group_1", 00:11:46.806 "admin_qpairs": 2, 00:11:46.806 "io_qpairs": 84, 00:11:46.806 "current_admin_qpairs": 0, 00:11:46.806 "current_io_qpairs": 0, 00:11:46.806 "pending_bdev_io": 0, 00:11:46.806 "completed_nvme_io": 208, 00:11:46.806 "transports": [ 00:11:46.806 { 00:11:46.806 "trtype": "TCP" 00:11:46.806 } 00:11:46.806 ] 00:11:46.806 }, 00:11:46.806 { 00:11:46.806 "name": "nvmf_tgt_poll_group_2", 00:11:46.806 "admin_qpairs": 1, 00:11:46.806 "io_qpairs": 84, 00:11:46.806 "current_admin_qpairs": 0, 00:11:46.806 "current_io_qpairs": 0, 00:11:46.806 "pending_bdev_io": 0, 00:11:46.806 "completed_nvme_io": 135, 00:11:46.806 "transports": [ 00:11:46.806 { 00:11:46.806 "trtype": "TCP" 00:11:46.806 } 00:11:46.806 ] 00:11:46.806 }, 00:11:46.806 { 00:11:46.806 "name": "nvmf_tgt_poll_group_3", 00:11:46.806 "admin_qpairs": 2, 00:11:46.806 "io_qpairs": 84, 00:11:46.806 "current_admin_qpairs": 0, 00:11:46.806 "current_io_qpairs": 0, 00:11:46.806 "pending_bdev_io": 0, 00:11:46.806 "completed_nvme_io": 135, 00:11:46.806 "transports": [ 00:11:46.806 { 00:11:46.806 "trtype": "TCP" 00:11:46.806 } 00:11:46.806 ] 00:11:46.806 } 00:11:46.806 ] 00:11:46.806 }' 00:11:46.806 01:19:38 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:46.806 01:19:38 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:46.806 01:19:38 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:46.806 01:19:38 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:46.806 01:19:38 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:46.806 01:19:38 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:46.806 01:19:38 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:46.806 01:19:38 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:46.806 01:19:38 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:46.806 01:19:38 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:46.806 01:19:38 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:46.806 01:19:38 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:46.806 01:19:38 -- target/rpc.sh@123 -- # nvmftestfini 00:11:46.806 01:19:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:46.806 01:19:38 -- nvmf/common.sh@116 -- # sync 00:11:46.806 01:19:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:46.806 01:19:38 -- nvmf/common.sh@119 -- # set +e 00:11:46.806 01:19:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:46.806 01:19:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:46.806 rmmod nvme_tcp 00:11:46.806 rmmod nvme_fabrics 00:11:46.806 rmmod nvme_keyring 00:11:46.806 01:19:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:46.806 01:19:38 -- nvmf/common.sh@123 -- # set -e 00:11:46.806 01:19:38 -- 
nvmf/common.sh@124 -- # return 0 00:11:46.806 01:19:38 -- nvmf/common.sh@477 -- # '[' -n 579987 ']' 00:11:46.806 01:19:38 -- nvmf/common.sh@478 -- # killprocess 579987 00:11:46.806 01:19:38 -- common/autotest_common.sh@926 -- # '[' -z 579987 ']' 00:11:46.806 01:19:38 -- common/autotest_common.sh@930 -- # kill -0 579987 00:11:46.806 01:19:38 -- common/autotest_common.sh@931 -- # uname 00:11:46.806 01:19:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:46.806 01:19:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 579987 00:11:46.806 01:19:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:46.806 01:19:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:46.806 01:19:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 579987' 00:11:46.806 killing process with pid 579987 00:11:46.806 01:19:38 -- common/autotest_common.sh@945 -- # kill 579987 00:11:46.806 01:19:38 -- common/autotest_common.sh@950 -- # wait 579987 00:11:47.372 01:19:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:47.372 01:19:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:47.372 01:19:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:47.372 01:19:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:47.372 01:19:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:47.372 01:19:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.372 01:19:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.372 01:19:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.277 01:19:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:49.277 00:11:49.277 real 0m25.658s 00:11:49.277 user 1m23.935s 00:11:49.277 sys 0m4.172s 00:11:49.277 01:19:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.277 01:19:40 -- common/autotest_common.sh@10 -- # set +x 00:11:49.277 ************************************ 00:11:49.277 END TEST nvmf_rpc 00:11:49.277 ************************************ 00:11:49.277 01:19:40 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:49.277 01:19:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:49.277 01:19:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:49.277 01:19:40 -- common/autotest_common.sh@10 -- # set +x 00:11:49.277 ************************************ 00:11:49.277 START TEST nvmf_invalid 00:11:49.277 ************************************ 00:11:49.277 01:19:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:49.277 * Looking for test storage... 
00:11:49.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.277 01:19:40 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.277 01:19:40 -- nvmf/common.sh@7 -- # uname -s 00:11:49.277 01:19:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.277 01:19:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.277 01:19:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.277 01:19:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.277 01:19:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.277 01:19:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.277 01:19:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.277 01:19:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.277 01:19:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.277 01:19:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.277 01:19:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:49.277 01:19:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:49.277 01:19:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.277 01:19:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.277 01:19:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.277 01:19:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.277 01:19:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.277 01:19:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.277 01:19:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.277 01:19:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.277 01:19:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.277 01:19:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.277 01:19:40 -- paths/export.sh@5 -- # export PATH 00:11:49.277 01:19:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.277 01:19:40 -- nvmf/common.sh@46 -- # : 0 00:11:49.277 01:19:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:49.277 01:19:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:49.277 01:19:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:49.278 01:19:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.278 01:19:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.278 01:19:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:49.278 01:19:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:49.278 01:19:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:49.278 01:19:41 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:49.278 01:19:41 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:49.278 01:19:41 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:49.278 01:19:41 -- target/invalid.sh@14 -- # target=foobar 00:11:49.278 01:19:41 -- target/invalid.sh@16 -- # RANDOM=0 00:11:49.278 01:19:41 -- target/invalid.sh@34 -- # nvmftestinit 00:11:49.278 01:19:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:49.278 01:19:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.278 01:19:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:49.278 01:19:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:49.278 01:19:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:49.278 01:19:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.278 01:19:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.278 01:19:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.278 01:19:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:49.278 01:19:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:49.278 01:19:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:49.278 01:19:41 -- common/autotest_common.sh@10 -- # set +x 00:11:51.179 01:19:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:51.179 01:19:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:51.179 01:19:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:51.179 01:19:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:51.179 01:19:42 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:51.179 01:19:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:51.179 01:19:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:51.179 01:19:42 -- nvmf/common.sh@294 -- # net_devs=() 00:11:51.179 01:19:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:51.179 01:19:42 -- nvmf/common.sh@295 -- # e810=() 00:11:51.179 01:19:42 -- nvmf/common.sh@295 -- # local -ga e810 00:11:51.179 01:19:42 -- nvmf/common.sh@296 -- # x722=() 00:11:51.179 01:19:42 -- nvmf/common.sh@296 -- # local -ga x722 00:11:51.179 01:19:42 -- nvmf/common.sh@297 -- # mlx=() 00:11:51.179 01:19:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:51.179 01:19:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.179 01:19:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.179 01:19:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.179 01:19:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.179 01:19:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.179 01:19:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.179 01:19:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.179 01:19:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.179 01:19:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.179 01:19:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.179 01:19:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.179 01:19:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:51.179 01:19:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:51.179 01:19:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:51.179 01:19:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:51.179 01:19:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:51.179 01:19:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:51.179 01:19:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:51.179 01:19:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:51.179 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:51.180 01:19:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:51.180 01:19:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:51.180 01:19:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.180 01:19:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.180 01:19:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:51.180 01:19:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:51.180 01:19:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:51.180 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:51.180 01:19:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:51.180 01:19:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:51.180 01:19:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.180 01:19:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.180 01:19:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:51.180 01:19:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:51.180 01:19:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:51.180 01:19:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:51.180 01:19:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:51.180 
01:19:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.180 01:19:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:51.180 01:19:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.180 01:19:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:51.180 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:51.180 01:19:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.180 01:19:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:51.180 01:19:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.180 01:19:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:51.180 01:19:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.180 01:19:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:51.180 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:51.180 01:19:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.180 01:19:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:51.180 01:19:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:51.180 01:19:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:51.180 01:19:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:51.180 01:19:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:51.180 01:19:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.180 01:19:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.180 01:19:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.180 01:19:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:51.180 01:19:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.180 01:19:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.180 01:19:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:51.180 01:19:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.180 01:19:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.180 01:19:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:51.180 01:19:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:51.180 01:19:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.180 01:19:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.438 01:19:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.438 01:19:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.438 01:19:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:51.438 01:19:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.438 01:19:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.438 01:19:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.438 01:19:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:51.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:11:51.438 00:11:51.438 --- 10.0.0.2 ping statistics --- 00:11:51.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.438 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:11:51.438 01:19:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:11:51.438 00:11:51.438 --- 10.0.0.1 ping statistics --- 00:11:51.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.438 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:11:51.438 01:19:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.438 01:19:43 -- nvmf/common.sh@410 -- # return 0 00:11:51.438 01:19:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:51.438 01:19:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.438 01:19:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:51.438 01:19:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:51.438 01:19:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.438 01:19:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:51.438 01:19:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:51.438 01:19:43 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:51.438 01:19:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:51.438 01:19:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:51.438 01:19:43 -- common/autotest_common.sh@10 -- # set +x 00:11:51.438 01:19:43 -- nvmf/common.sh@469 -- # nvmfpid=584636 00:11:51.438 01:19:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.438 01:19:43 -- nvmf/common.sh@470 -- # waitforlisten 584636 00:11:51.438 01:19:43 -- common/autotest_common.sh@819 -- # '[' -z 584636 ']' 00:11:51.439 01:19:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.439 01:19:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:51.439 01:19:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.439 01:19:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:51.439 01:19:43 -- common/autotest_common.sh@10 -- # set +x 00:11:51.439 [2024-07-27 01:19:43.106480] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:51.439 [2024-07-27 01:19:43.106554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.439 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.439 [2024-07-27 01:19:43.186104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.700 [2024-07-27 01:19:43.310381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:51.700 [2024-07-27 01:19:43.310554] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.700 [2024-07-27 01:19:43.310574] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.700 [2024-07-27 01:19:43.310588] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
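At this point the target application has been launched inside the target network namespace and the harness waits on its JSON-RPC socket before issuing any commands. A minimal sketch of that start-and-wait pattern follows; the nvmf_tgt path, the use of rpc_get_methods as a liveness probe, and the retry budget are assumptions for illustration, with /var/tmp/spdk.sock taken as the default RPC socket seen in the log.

#!/usr/bin/env bash
# Hypothetical start-up helper: launch the target app in the target netns and
# wait for its JSON-RPC socket to accept commands before returning.
set -e

NS=cvl_0_0_ns_spdk
APP=./build/bin/nvmf_tgt          # path assumed; adjust to the build tree
SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF -m 0xF &
launcher_pid=$!

# Poll the RPC socket; rpc.py exits non-zero until the app is listening.
for _ in $(seq 1 100); do
    if rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        echo "target app is up (launcher pid $launcher_pid)"
        exit 0
    fi
    sleep 0.5
done

echo "target app did not start listening on $SOCK" >&2
kill "$launcher_pid" 2>/dev/null || true
exit 1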
00:11:51.700 [2024-07-27 01:19:43.313089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.700 [2024-07-27 01:19:43.313143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.700 [2024-07-27 01:19:43.313200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.700 [2024-07-27 01:19:43.313204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.633 01:19:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:52.633 01:19:44 -- common/autotest_common.sh@852 -- # return 0 00:11:52.633 01:19:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:52.633 01:19:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:52.633 01:19:44 -- common/autotest_common.sh@10 -- # set +x 00:11:52.633 01:19:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.633 01:19:44 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:52.633 01:19:44 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24936 00:11:52.633 [2024-07-27 01:19:44.380478] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:52.890 01:19:44 -- target/invalid.sh@40 -- # out='request: 00:11:52.890 { 00:11:52.890 "nqn": "nqn.2016-06.io.spdk:cnode24936", 00:11:52.890 "tgt_name": "foobar", 00:11:52.890 "method": "nvmf_create_subsystem", 00:11:52.890 "req_id": 1 00:11:52.890 } 00:11:52.890 Got JSON-RPC error response 00:11:52.890 response: 00:11:52.890 { 00:11:52.890 "code": -32603, 00:11:52.890 "message": "Unable to find target foobar" 00:11:52.890 }' 00:11:52.890 01:19:44 -- target/invalid.sh@41 -- # [[ request: 00:11:52.890 { 00:11:52.890 "nqn": "nqn.2016-06.io.spdk:cnode24936", 00:11:52.890 "tgt_name": "foobar", 00:11:52.890 "method": "nvmf_create_subsystem", 00:11:52.890 "req_id": 1 00:11:52.890 } 00:11:52.890 Got JSON-RPC error response 00:11:52.890 response: 00:11:52.890 { 00:11:52.890 "code": -32603, 00:11:52.890 "message": "Unable to find target foobar" 00:11:52.890 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:52.890 01:19:44 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:52.890 01:19:44 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16197 00:11:53.147 [2024-07-27 01:19:44.665426] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16197: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:53.147 01:19:44 -- target/invalid.sh@45 -- # out='request: 00:11:53.147 { 00:11:53.147 "nqn": "nqn.2016-06.io.spdk:cnode16197", 00:11:53.147 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:53.147 "method": "nvmf_create_subsystem", 00:11:53.147 "req_id": 1 00:11:53.147 } 00:11:53.147 Got JSON-RPC error response 00:11:53.147 response: 00:11:53.147 { 00:11:53.147 "code": -32602, 00:11:53.147 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:53.147 }' 00:11:53.147 01:19:44 -- target/invalid.sh@46 -- # [[ request: 00:11:53.148 { 00:11:53.148 "nqn": "nqn.2016-06.io.spdk:cnode16197", 00:11:53.148 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:53.148 "method": "nvmf_create_subsystem", 00:11:53.148 "req_id": 1 00:11:53.148 } 00:11:53.148 Got JSON-RPC error response 00:11:53.148 response: 00:11:53.148 { 
00:11:53.148 "code": -32602, 00:11:53.148 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:53.148 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:53.148 01:19:44 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:53.148 01:19:44 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31713 00:11:53.405 [2024-07-27 01:19:44.926223] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31713: invalid model number 'SPDK_Controller' 00:11:53.405 01:19:44 -- target/invalid.sh@50 -- # out='request: 00:11:53.405 { 00:11:53.405 "nqn": "nqn.2016-06.io.spdk:cnode31713", 00:11:53.405 "model_number": "SPDK_Controller\u001f", 00:11:53.405 "method": "nvmf_create_subsystem", 00:11:53.405 "req_id": 1 00:11:53.405 } 00:11:53.405 Got JSON-RPC error response 00:11:53.405 response: 00:11:53.405 { 00:11:53.405 "code": -32602, 00:11:53.405 "message": "Invalid MN SPDK_Controller\u001f" 00:11:53.405 }' 00:11:53.405 01:19:44 -- target/invalid.sh@51 -- # [[ request: 00:11:53.405 { 00:11:53.405 "nqn": "nqn.2016-06.io.spdk:cnode31713", 00:11:53.405 "model_number": "SPDK_Controller\u001f", 00:11:53.405 "method": "nvmf_create_subsystem", 00:11:53.405 "req_id": 1 00:11:53.405 } 00:11:53.405 Got JSON-RPC error response 00:11:53.405 response: 00:11:53.405 { 00:11:53.405 "code": -32602, 00:11:53.405 "message": "Invalid MN SPDK_Controller\u001f" 00:11:53.405 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:53.405 01:19:44 -- target/invalid.sh@54 -- # gen_random_s 21 00:11:53.405 01:19:44 -- target/invalid.sh@19 -- # local length=21 ll 00:11:53.405 01:19:44 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:53.405 01:19:44 -- target/invalid.sh@21 -- # local chars 00:11:53.405 01:19:44 -- target/invalid.sh@22 -- # local string 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # printf %x 93 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # string+=']' 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # printf %x 103 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # string+=g 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # printf %x 74 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # string+=J 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # printf %x 67 00:11:53.405 01:19:44 -- 
target/invalid.sh@25 -- # echo -e '\x43' 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # string+=C 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # printf %x 58 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # string+=: 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # printf %x 122 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # string+=z 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # printf %x 45 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # string+=- 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # printf %x 41 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # string+=')' 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # printf %x 52 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # string+=4 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.405 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.405 01:19:44 -- target/invalid.sh@25 -- # printf %x 43 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # string+=+ 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # printf %x 115 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # string+=s 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # printf %x 77 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # string+=M 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # printf %x 99 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # string+=c 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # printf %x 56 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # string+=8 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # printf %x 112 00:11:53.406 01:19:44 -- 
target/invalid.sh@25 -- # echo -e '\x70' 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # string+=p 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # printf %x 92 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # string+='\' 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # printf %x 93 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # string+=']' 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # printf %x 67 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # string+=C 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # printf %x 47 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:53.406 01:19:44 -- target/invalid.sh@25 -- # string+=/ 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.406 01:19:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.406 01:19:45 -- target/invalid.sh@25 -- # printf %x 50 00:11:53.406 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:53.406 01:19:45 -- target/invalid.sh@25 -- # string+=2 00:11:53.406 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.406 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.406 01:19:45 -- target/invalid.sh@25 -- # printf %x 70 00:11:53.406 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:53.406 01:19:45 -- target/invalid.sh@25 -- # string+=F 00:11:53.406 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.406 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.406 01:19:45 -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:11:53.406 01:19:45 -- target/invalid.sh@31 -- # echo ']gJC:z-)4+sMc8p\]C/2F' 00:11:53.406 01:19:45 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ']gJC:z-)4+sMc8p\]C/2F' nqn.2016-06.io.spdk:cnode27485 00:11:53.664 [2024-07-27 01:19:45.219222] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27485: invalid serial number ']gJC:z-)4+sMc8p\]C/2F' 00:11:53.664 01:19:45 -- target/invalid.sh@54 -- # out='request: 00:11:53.664 { 00:11:53.664 "nqn": "nqn.2016-06.io.spdk:cnode27485", 00:11:53.664 "serial_number": "]gJC:z-)4+sMc8p\\]C/2F", 00:11:53.664 "method": "nvmf_create_subsystem", 00:11:53.664 "req_id": 1 00:11:53.664 } 00:11:53.664 Got JSON-RPC error response 00:11:53.664 response: 00:11:53.664 { 00:11:53.664 "code": -32602, 00:11:53.664 "message": "Invalid SN ]gJC:z-)4+sMc8p\\]C/2F" 00:11:53.664 }' 00:11:53.664 01:19:45 -- target/invalid.sh@55 -- # [[ request: 00:11:53.664 { 00:11:53.664 "nqn": "nqn.2016-06.io.spdk:cnode27485", 00:11:53.664 "serial_number": "]gJC:z-)4+sMc8p\\]C/2F", 00:11:53.664 "method": "nvmf_create_subsystem", 00:11:53.664 "req_id": 1 00:11:53.664 } 00:11:53.664 Got JSON-RPC error response 00:11:53.664 response: 00:11:53.664 { 00:11:53.664 "code": -32602, 00:11:53.664 
"message": "Invalid SN ]gJC:z-)4+sMc8p\\]C/2F" 00:11:53.664 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:53.664 01:19:45 -- target/invalid.sh@58 -- # gen_random_s 41 00:11:53.664 01:19:45 -- target/invalid.sh@19 -- # local length=41 ll 00:11:53.664 01:19:45 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:53.664 01:19:45 -- target/invalid.sh@21 -- # local chars 00:11:53.664 01:19:45 -- target/invalid.sh@22 -- # local string 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 103 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=g 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 117 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=u 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 34 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+='"' 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 56 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=8 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 91 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+='[' 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 42 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+='*' 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 90 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=Z 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 103 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=g 00:11:53.664 01:19:45 -- target/invalid.sh@24 
-- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 80 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=P 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 54 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=6 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 123 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+='{' 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 89 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=Y 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 48 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=0 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 53 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=5 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 111 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=o 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 100 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=d 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 46 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=. 
00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 115 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=s 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 56 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # string+=8 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.664 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.664 01:19:45 -- target/invalid.sh@25 -- # printf %x 66 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+=B 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 97 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+=a 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 34 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+='"' 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 96 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+='`' 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 92 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+='\' 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 94 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+='^' 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 49 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+=1 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 98 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+=b 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 33 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+='!' 
00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 53 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+=5 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 52 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+=4 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 52 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+=4 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 114 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+=r 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 34 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+='"' 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 98 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+=b 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 78 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+=N 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 51 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+=3 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 55 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+=7 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 36 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+='$' 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 126 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+='~' 
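The long printf/echo/string+= run above (continuing below until all 41 characters have been picked) is invalid.sh's gen_random_s helper assembling a random string, one printable character (ASCII codes 32 through 127) at a time, for use as a deliberately bogus serial or model number. A compact sketch of the same idea, assuming plain bash with $RANDOM and not claiming to match the exact invalid.sh source:

# Sketch of a gen_random_s-style helper (assumption: bash with $RANDOM;
# not the exact invalid.sh implementation traced in this log).
gen_random_s_sketch() {
    local length=$1 string="" code ll
    for ((ll = 0; ll < length; ll++)); do
        code=$((32 + RANDOM % 96))                         # one of the 96 codes 32..127
        string+=$(printf "\\$(printf '%03o' "$code")")     # append that character
    done
    echo "$string"
}
# Usage: gen_random_s_sketch 21   -> a 21-character candidate like ']gJC:z-)4+sMc8p\]C/2F'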
00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 95 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+=_ 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # printf %x 117 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:53.665 01:19:45 -- target/invalid.sh@25 -- # string+=u 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.665 01:19:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.665 01:19:45 -- target/invalid.sh@28 -- # [[ g == \- ]] 00:11:53.665 01:19:45 -- target/invalid.sh@31 -- # echo 'gu"8[*ZgP6{Y05od.s8Ba"`\^1b!544r"bN37$~_u' 00:11:53.665 01:19:45 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'gu"8[*ZgP6{Y05od.s8Ba"`\^1b!544r"bN37$~_u' nqn.2016-06.io.spdk:cnode12428 00:11:53.923 [2024-07-27 01:19:45.608530] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12428: invalid model number 'gu"8[*ZgP6{Y05od.s8Ba"`\^1b!544r"bN37$~_u' 00:11:53.923 01:19:45 -- target/invalid.sh@58 -- # out='request: 00:11:53.923 { 00:11:53.923 "nqn": "nqn.2016-06.io.spdk:cnode12428", 00:11:53.923 "model_number": "gu\"8[*ZgP6{Y05od.s8Ba\"`\\^1b!544r\"bN37$~_u", 00:11:53.923 "method": "nvmf_create_subsystem", 00:11:53.923 "req_id": 1 00:11:53.923 } 00:11:53.923 Got JSON-RPC error response 00:11:53.923 response: 00:11:53.923 { 00:11:53.923 "code": -32602, 00:11:53.923 "message": "Invalid MN gu\"8[*ZgP6{Y05od.s8Ba\"`\\^1b!544r\"bN37$~_u" 00:11:53.923 }' 00:11:53.923 01:19:45 -- target/invalid.sh@59 -- # [[ request: 00:11:53.923 { 00:11:53.923 "nqn": "nqn.2016-06.io.spdk:cnode12428", 00:11:53.923 "model_number": "gu\"8[*ZgP6{Y05od.s8Ba\"`\\^1b!544r\"bN37$~_u", 00:11:53.923 "method": "nvmf_create_subsystem", 00:11:53.923 "req_id": 1 00:11:53.923 } 00:11:53.923 Got JSON-RPC error response 00:11:53.923 response: 00:11:53.923 { 00:11:53.923 "code": -32602, 00:11:53.923 "message": "Invalid MN gu\"8[*ZgP6{Y05od.s8Ba\"`\\^1b!544r\"bN37$~_u" 00:11:53.923 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:53.923 01:19:45 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:54.180 [2024-07-27 01:19:45.841428] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.180 01:19:45 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:54.437 01:19:46 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:54.437 01:19:46 -- target/invalid.sh@67 -- # echo '' 00:11:54.437 01:19:46 -- target/invalid.sh@67 -- # head -n 1 00:11:54.437 01:19:46 -- target/invalid.sh@67 -- # IP= 00:11:54.437 01:19:46 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:54.694 [2024-07-27 01:19:46.339104] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:54.694 01:19:46 -- target/invalid.sh@69 -- # out='request: 00:11:54.694 { 00:11:54.694 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:54.694 
"listen_address": { 00:11:54.694 "trtype": "tcp", 00:11:54.694 "traddr": "", 00:11:54.694 "trsvcid": "4421" 00:11:54.694 }, 00:11:54.695 "method": "nvmf_subsystem_remove_listener", 00:11:54.695 "req_id": 1 00:11:54.695 } 00:11:54.695 Got JSON-RPC error response 00:11:54.695 response: 00:11:54.695 { 00:11:54.695 "code": -32602, 00:11:54.695 "message": "Invalid parameters" 00:11:54.695 }' 00:11:54.695 01:19:46 -- target/invalid.sh@70 -- # [[ request: 00:11:54.695 { 00:11:54.695 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:54.695 "listen_address": { 00:11:54.695 "trtype": "tcp", 00:11:54.695 "traddr": "", 00:11:54.695 "trsvcid": "4421" 00:11:54.695 }, 00:11:54.695 "method": "nvmf_subsystem_remove_listener", 00:11:54.695 "req_id": 1 00:11:54.695 } 00:11:54.695 Got JSON-RPC error response 00:11:54.695 response: 00:11:54.695 { 00:11:54.695 "code": -32602, 00:11:54.695 "message": "Invalid parameters" 00:11:54.695 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:54.695 01:19:46 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12538 -i 0 00:11:54.952 [2024-07-27 01:19:46.567814] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12538: invalid cntlid range [0-65519] 00:11:54.952 01:19:46 -- target/invalid.sh@73 -- # out='request: 00:11:54.952 { 00:11:54.952 "nqn": "nqn.2016-06.io.spdk:cnode12538", 00:11:54.952 "min_cntlid": 0, 00:11:54.952 "method": "nvmf_create_subsystem", 00:11:54.952 "req_id": 1 00:11:54.952 } 00:11:54.952 Got JSON-RPC error response 00:11:54.952 response: 00:11:54.952 { 00:11:54.952 "code": -32602, 00:11:54.952 "message": "Invalid cntlid range [0-65519]" 00:11:54.952 }' 00:11:54.952 01:19:46 -- target/invalid.sh@74 -- # [[ request: 00:11:54.952 { 00:11:54.952 "nqn": "nqn.2016-06.io.spdk:cnode12538", 00:11:54.952 "min_cntlid": 0, 00:11:54.952 "method": "nvmf_create_subsystem", 00:11:54.952 "req_id": 1 00:11:54.952 } 00:11:54.952 Got JSON-RPC error response 00:11:54.952 response: 00:11:54.952 { 00:11:54.952 "code": -32602, 00:11:54.952 "message": "Invalid cntlid range [0-65519]" 00:11:54.952 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:54.952 01:19:46 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14341 -i 65520 00:11:55.210 [2024-07-27 01:19:46.800612] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14341: invalid cntlid range [65520-65519] 00:11:55.210 01:19:46 -- target/invalid.sh@75 -- # out='request: 00:11:55.210 { 00:11:55.210 "nqn": "nqn.2016-06.io.spdk:cnode14341", 00:11:55.210 "min_cntlid": 65520, 00:11:55.210 "method": "nvmf_create_subsystem", 00:11:55.210 "req_id": 1 00:11:55.210 } 00:11:55.210 Got JSON-RPC error response 00:11:55.210 response: 00:11:55.210 { 00:11:55.211 "code": -32602, 00:11:55.211 "message": "Invalid cntlid range [65520-65519]" 00:11:55.211 }' 00:11:55.211 01:19:46 -- target/invalid.sh@76 -- # [[ request: 00:11:55.211 { 00:11:55.211 "nqn": "nqn.2016-06.io.spdk:cnode14341", 00:11:55.211 "min_cntlid": 65520, 00:11:55.211 "method": "nvmf_create_subsystem", 00:11:55.211 "req_id": 1 00:11:55.211 } 00:11:55.211 Got JSON-RPC error response 00:11:55.211 response: 00:11:55.211 { 00:11:55.211 "code": -32602, 00:11:55.211 "message": "Invalid cntlid range [65520-65519]" 00:11:55.211 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:55.211 01:19:46 -- target/invalid.sh@77 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27119 -I 0 00:11:55.468 [2024-07-27 01:19:47.049489] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27119: invalid cntlid range [1-0] 00:11:55.468 01:19:47 -- target/invalid.sh@77 -- # out='request: 00:11:55.468 { 00:11:55.468 "nqn": "nqn.2016-06.io.spdk:cnode27119", 00:11:55.468 "max_cntlid": 0, 00:11:55.468 "method": "nvmf_create_subsystem", 00:11:55.468 "req_id": 1 00:11:55.468 } 00:11:55.468 Got JSON-RPC error response 00:11:55.468 response: 00:11:55.468 { 00:11:55.468 "code": -32602, 00:11:55.468 "message": "Invalid cntlid range [1-0]" 00:11:55.468 }' 00:11:55.468 01:19:47 -- target/invalid.sh@78 -- # [[ request: 00:11:55.468 { 00:11:55.468 "nqn": "nqn.2016-06.io.spdk:cnode27119", 00:11:55.468 "max_cntlid": 0, 00:11:55.468 "method": "nvmf_create_subsystem", 00:11:55.468 "req_id": 1 00:11:55.468 } 00:11:55.468 Got JSON-RPC error response 00:11:55.468 response: 00:11:55.468 { 00:11:55.468 "code": -32602, 00:11:55.468 "message": "Invalid cntlid range [1-0]" 00:11:55.468 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:55.468 01:19:47 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12716 -I 65520 00:11:55.727 [2024-07-27 01:19:47.290329] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12716: invalid cntlid range [1-65520] 00:11:55.727 01:19:47 -- target/invalid.sh@79 -- # out='request: 00:11:55.727 { 00:11:55.727 "nqn": "nqn.2016-06.io.spdk:cnode12716", 00:11:55.727 "max_cntlid": 65520, 00:11:55.727 "method": "nvmf_create_subsystem", 00:11:55.727 "req_id": 1 00:11:55.727 } 00:11:55.727 Got JSON-RPC error response 00:11:55.727 response: 00:11:55.727 { 00:11:55.727 "code": -32602, 00:11:55.727 "message": "Invalid cntlid range [1-65520]" 00:11:55.727 }' 00:11:55.727 01:19:47 -- target/invalid.sh@80 -- # [[ request: 00:11:55.727 { 00:11:55.727 "nqn": "nqn.2016-06.io.spdk:cnode12716", 00:11:55.727 "max_cntlid": 65520, 00:11:55.727 "method": "nvmf_create_subsystem", 00:11:55.727 "req_id": 1 00:11:55.727 } 00:11:55.727 Got JSON-RPC error response 00:11:55.727 response: 00:11:55.727 { 00:11:55.727 "code": -32602, 00:11:55.727 "message": "Invalid cntlid range [1-65520]" 00:11:55.727 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:55.727 01:19:47 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22216 -i 6 -I 5 00:11:55.985 [2024-07-27 01:19:47.527137] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22216: invalid cntlid range [6-5] 00:11:55.985 01:19:47 -- target/invalid.sh@83 -- # out='request: 00:11:55.985 { 00:11:55.985 "nqn": "nqn.2016-06.io.spdk:cnode22216", 00:11:55.985 "min_cntlid": 6, 00:11:55.985 "max_cntlid": 5, 00:11:55.985 "method": "nvmf_create_subsystem", 00:11:55.985 "req_id": 1 00:11:55.985 } 00:11:55.985 Got JSON-RPC error response 00:11:55.985 response: 00:11:55.985 { 00:11:55.985 "code": -32602, 00:11:55.985 "message": "Invalid cntlid range [6-5]" 00:11:55.985 }' 00:11:55.985 01:19:47 -- target/invalid.sh@84 -- # [[ request: 00:11:55.985 { 00:11:55.985 "nqn": "nqn.2016-06.io.spdk:cnode22216", 00:11:55.985 "min_cntlid": 6, 00:11:55.985 "max_cntlid": 5, 00:11:55.985 "method": "nvmf_create_subsystem", 00:11:55.985 "req_id": 1 00:11:55.985 } 00:11:55.985 Got 
JSON-RPC error response 00:11:55.985 response: 00:11:55.985 { 00:11:55.985 "code": -32602, 00:11:55.985 "message": "Invalid cntlid range [6-5]" 00:11:55.985 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:55.985 01:19:47 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:55.985 01:19:47 -- target/invalid.sh@87 -- # out='request: 00:11:55.985 { 00:11:55.985 "name": "foobar", 00:11:55.985 "method": "nvmf_delete_target", 00:11:55.985 "req_id": 1 00:11:55.985 } 00:11:55.985 Got JSON-RPC error response 00:11:55.985 response: 00:11:55.985 { 00:11:55.986 "code": -32602, 00:11:55.986 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:55.986 }' 00:11:55.986 01:19:47 -- target/invalid.sh@88 -- # [[ request: 00:11:55.986 { 00:11:55.986 "name": "foobar", 00:11:55.986 "method": "nvmf_delete_target", 00:11:55.986 "req_id": 1 00:11:55.986 } 00:11:55.986 Got JSON-RPC error response 00:11:55.986 response: 00:11:55.986 { 00:11:55.986 "code": -32602, 00:11:55.986 "message": "The specified target doesn't exist, cannot delete it." 00:11:55.986 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:55.986 01:19:47 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:55.986 01:19:47 -- target/invalid.sh@91 -- # nvmftestfini 00:11:55.986 01:19:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:55.986 01:19:47 -- nvmf/common.sh@116 -- # sync 00:11:55.986 01:19:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:55.986 01:19:47 -- nvmf/common.sh@119 -- # set +e 00:11:55.986 01:19:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:55.986 01:19:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:55.986 rmmod nvme_tcp 00:11:55.986 rmmod nvme_fabrics 00:11:55.986 rmmod nvme_keyring 00:11:55.986 01:19:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:55.986 01:19:47 -- nvmf/common.sh@123 -- # set -e 00:11:55.986 01:19:47 -- nvmf/common.sh@124 -- # return 0 00:11:55.986 01:19:47 -- nvmf/common.sh@477 -- # '[' -n 584636 ']' 00:11:55.986 01:19:47 -- nvmf/common.sh@478 -- # killprocess 584636 00:11:55.986 01:19:47 -- common/autotest_common.sh@926 -- # '[' -z 584636 ']' 00:11:55.986 01:19:47 -- common/autotest_common.sh@930 -- # kill -0 584636 00:11:55.986 01:19:47 -- common/autotest_common.sh@931 -- # uname 00:11:55.986 01:19:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:55.986 01:19:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 584636 00:11:55.986 01:19:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:55.986 01:19:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:55.986 01:19:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 584636' 00:11:55.986 killing process with pid 584636 00:11:55.986 01:19:47 -- common/autotest_common.sh@945 -- # kill 584636 00:11:55.986 01:19:47 -- common/autotest_common.sh@950 -- # wait 584636 00:11:56.556 01:19:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:56.556 01:19:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:56.556 01:19:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:56.556 01:19:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:56.556 01:19:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:56.556 01:19:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.556 01:19:48 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.556 01:19:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.464 01:19:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:58.464 00:11:58.464 real 0m9.112s 00:11:58.464 user 0m22.354s 00:11:58.464 sys 0m2.376s 00:11:58.464 01:19:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.464 01:19:50 -- common/autotest_common.sh@10 -- # set +x 00:11:58.464 ************************************ 00:11:58.464 END TEST nvmf_invalid 00:11:58.464 ************************************ 00:11:58.464 01:19:50 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:58.464 01:19:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:58.464 01:19:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:58.464 01:19:50 -- common/autotest_common.sh@10 -- # set +x 00:11:58.464 ************************************ 00:11:58.464 START TEST nvmf_abort 00:11:58.464 ************************************ 00:11:58.464 01:19:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:58.464 * Looking for test storage... 00:11:58.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.464 01:19:50 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.464 01:19:50 -- nvmf/common.sh@7 -- # uname -s 00:11:58.464 01:19:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.464 01:19:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.464 01:19:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.464 01:19:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.464 01:19:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.464 01:19:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.464 01:19:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.464 01:19:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.464 01:19:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.464 01:19:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.464 01:19:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:58.464 01:19:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:58.464 01:19:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.464 01:19:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.464 01:19:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.464 01:19:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.464 01:19:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.464 01:19:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.464 01:19:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.464 01:19:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.464 01:19:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.465 01:19:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.465 01:19:50 -- paths/export.sh@5 -- # export PATH 00:11:58.465 01:19:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.465 01:19:50 -- nvmf/common.sh@46 -- # : 0 00:11:58.465 01:19:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:58.465 01:19:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:58.465 01:19:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:58.465 01:19:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.465 01:19:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.465 01:19:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:58.465 01:19:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:58.465 01:19:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:58.465 01:19:50 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:58.465 01:19:50 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:58.465 01:19:50 -- target/abort.sh@14 -- # nvmftestinit 00:11:58.465 01:19:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:58.465 01:19:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.465 01:19:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:58.465 01:19:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:58.465 01:19:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:58.465 01:19:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:11:58.465 01:19:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.465 01:19:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.465 01:19:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:58.465 01:19:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:58.465 01:19:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:58.465 01:19:50 -- common/autotest_common.sh@10 -- # set +x 00:12:00.369 01:19:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:00.369 01:19:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:00.369 01:19:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:00.369 01:19:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:00.369 01:19:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:00.369 01:19:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:00.369 01:19:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:00.369 01:19:52 -- nvmf/common.sh@294 -- # net_devs=() 00:12:00.369 01:19:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:00.369 01:19:52 -- nvmf/common.sh@295 -- # e810=() 00:12:00.369 01:19:52 -- nvmf/common.sh@295 -- # local -ga e810 00:12:00.369 01:19:52 -- nvmf/common.sh@296 -- # x722=() 00:12:00.369 01:19:52 -- nvmf/common.sh@296 -- # local -ga x722 00:12:00.369 01:19:52 -- nvmf/common.sh@297 -- # mlx=() 00:12:00.369 01:19:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:00.369 01:19:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.369 01:19:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.369 01:19:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.369 01:19:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.369 01:19:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.369 01:19:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.369 01:19:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.369 01:19:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.369 01:19:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.369 01:19:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.369 01:19:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.369 01:19:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:00.369 01:19:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:00.369 01:19:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:00.369 01:19:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:00.369 01:19:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:00.369 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:00.369 01:19:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:00.369 01:19:52 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:00.369 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:00.369 01:19:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:00.369 01:19:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:00.369 01:19:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:00.369 01:19:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.369 01:19:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:00.369 01:19:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.369 01:19:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:00.369 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:00.370 01:19:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.370 01:19:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:00.370 01:19:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.370 01:19:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:00.370 01:19:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.370 01:19:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:00.370 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:00.370 01:19:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.370 01:19:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:00.370 01:19:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:00.370 01:19:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:00.370 01:19:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:00.370 01:19:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:00.370 01:19:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.370 01:19:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.370 01:19:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.370 01:19:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:00.370 01:19:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.370 01:19:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.370 01:19:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:00.370 01:19:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.370 01:19:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.370 01:19:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:00.370 01:19:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:00.370 01:19:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.370 01:19:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.629 01:19:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.629 01:19:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.629 01:19:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:00.629 01:19:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
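At this point nvmf_tcp_init has flushed both E810 ports and is wiring them into a small point-to-point test network, with the target-side port moved into its own network namespace; the trace continues below with the namespace loopback, an iptables ACCEPT rule for port 4420 on the initiator interface, and ping checks in both directions. Pulled out of the xtrace noise, the wiring so far amounts to the following (interface names and addresses as reported in this log; they will differ on other hardware):

# Namespace wiring performed by nvmf_tcp_init, consolidated from the trace
# above; run as root on a machine with the same cvl_0_0/cvl_0_1 ports.
TARGET_NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"                           # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator-side address
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up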
00:12:00.629 01:19:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.629 01:19:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.629 01:19:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:00.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:12:00.629 00:12:00.629 --- 10.0.0.2 ping statistics --- 00:12:00.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.629 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:12:00.629 01:19:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:12:00.629 00:12:00.629 --- 10.0.0.1 ping statistics --- 00:12:00.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.629 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:12:00.629 01:19:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.629 01:19:52 -- nvmf/common.sh@410 -- # return 0 00:12:00.629 01:19:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:00.629 01:19:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.629 01:19:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:00.629 01:19:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:00.629 01:19:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.629 01:19:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:00.629 01:19:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:00.629 01:19:52 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:00.629 01:19:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:00.629 01:19:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:00.629 01:19:52 -- common/autotest_common.sh@10 -- # set +x 00:12:00.629 01:19:52 -- nvmf/common.sh@469 -- # nvmfpid=587308 00:12:00.629 01:19:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:00.629 01:19:52 -- nvmf/common.sh@470 -- # waitforlisten 587308 00:12:00.629 01:19:52 -- common/autotest_common.sh@819 -- # '[' -z 587308 ']' 00:12:00.629 01:19:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.629 01:19:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:00.629 01:19:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.629 01:19:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:00.629 01:19:52 -- common/autotest_common.sh@10 -- # set +x 00:12:00.629 [2024-07-27 01:19:52.282167] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
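waitforlisten above blocks until the nvmf_tgt just launched inside the namespace is alive and listening on the default RPC socket /var/tmp/spdk.sock. A minimal stand-in for that wait, offered only as a sketch (the real waitforlisten in autotest_common.sh performs additional checks):

# Assumption-level sketch of a waitforlisten-style wait; not the
# autotest_common.sh implementation.
wait_for_rpc_socket() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || return 1    # target exited before listening
        [ -S "$sock" ] && return 0                # the UNIX domain socket has been created
        sleep 0.1
    done
    return 1
}
# Example: wait_for_rpc_socket 587308 || echo 'nvmf_tgt failed to start'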
00:12:00.629 [2024-07-27 01:19:52.282251] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.629 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.629 [2024-07-27 01:19:52.351245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:00.888 [2024-07-27 01:19:52.469313] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:00.888 [2024-07-27 01:19:52.469506] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.888 [2024-07-27 01:19:52.469526] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.888 [2024-07-27 01:19:52.469541] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.888 [2024-07-27 01:19:52.469647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.888 [2024-07-27 01:19:52.469691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.888 [2024-07-27 01:19:52.469694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.820 01:19:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:01.820 01:19:53 -- common/autotest_common.sh@852 -- # return 0 00:12:01.820 01:19:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:01.820 01:19:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:01.820 01:19:53 -- common/autotest_common.sh@10 -- # set +x 00:12:01.820 01:19:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.820 01:19:53 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:01.820 01:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:01.820 01:19:53 -- common/autotest_common.sh@10 -- # set +x 00:12:01.820 [2024-07-27 01:19:53.239502] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.820 01:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:01.820 01:19:53 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:01.820 01:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:01.820 01:19:53 -- common/autotest_common.sh@10 -- # set +x 00:12:01.820 Malloc0 00:12:01.820 01:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:01.820 01:19:53 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:01.820 01:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:01.820 01:19:53 -- common/autotest_common.sh@10 -- # set +x 00:12:01.820 Delay0 00:12:01.820 01:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:01.820 01:19:53 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:01.820 01:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:01.820 01:19:53 -- common/autotest_common.sh@10 -- # set +x 00:12:01.820 01:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:01.820 01:19:53 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:01.820 01:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:01.820 01:19:53 -- common/autotest_common.sh@10 -- # set +x 00:12:01.820 01:19:53 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:12:01.820 01:19:53 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:01.820 01:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:01.820 01:19:53 -- common/autotest_common.sh@10 -- # set +x 00:12:01.820 [2024-07-27 01:19:53.314008] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.820 01:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:01.820 01:19:53 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:01.820 01:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:01.820 01:19:53 -- common/autotest_common.sh@10 -- # set +x 00:12:01.820 01:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:01.820 01:19:53 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:01.820 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.821 [2024-07-27 01:19:53.420609] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:04.350 Initializing NVMe Controllers 00:12:04.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:04.350 controller IO queue size 128 less than required 00:12:04.350 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:04.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:04.350 Initialization complete. Launching workers. 00:12:04.350 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32981 00:12:04.350 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33042, failed to submit 62 00:12:04.350 success 32981, unsuccess 61, failed 0 00:12:04.350 01:19:55 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:04.350 01:19:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:04.350 01:19:55 -- common/autotest_common.sh@10 -- # set +x 00:12:04.350 01:19:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:04.350 01:19:55 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:04.350 01:19:55 -- target/abort.sh@38 -- # nvmftestfini 00:12:04.350 01:19:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:04.350 01:19:55 -- nvmf/common.sh@116 -- # sync 00:12:04.350 01:19:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:04.350 01:19:55 -- nvmf/common.sh@119 -- # set +e 00:12:04.350 01:19:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:04.350 01:19:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:04.350 rmmod nvme_tcp 00:12:04.350 rmmod nvme_fabrics 00:12:04.350 rmmod nvme_keyring 00:12:04.350 01:19:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:04.350 01:19:55 -- nvmf/common.sh@123 -- # set -e 00:12:04.350 01:19:55 -- nvmf/common.sh@124 -- # return 0 00:12:04.350 01:19:55 -- nvmf/common.sh@477 -- # '[' -n 587308 ']' 00:12:04.350 01:19:55 -- nvmf/common.sh@478 -- # killprocess 587308 00:12:04.350 01:19:55 -- common/autotest_common.sh@926 -- # '[' -z 587308 ']' 00:12:04.350 01:19:55 -- common/autotest_common.sh@930 -- # kill -0 587308 00:12:04.350 01:19:55 -- common/autotest_common.sh@931 -- # uname 00:12:04.350 01:19:55 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:04.350 01:19:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 587308 00:12:04.350 01:19:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:04.350 01:19:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:04.350 01:19:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 587308' 00:12:04.350 killing process with pid 587308 00:12:04.350 01:19:55 -- common/autotest_common.sh@945 -- # kill 587308 00:12:04.350 01:19:55 -- common/autotest_common.sh@950 -- # wait 587308 00:12:04.350 01:19:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:04.350 01:19:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:04.350 01:19:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:04.350 01:19:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.350 01:19:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:04.350 01:19:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.350 01:19:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.350 01:19:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.293 01:19:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:06.293 00:12:06.293 real 0m7.935s 00:12:06.293 user 0m12.882s 00:12:06.293 sys 0m2.494s 00:12:06.293 01:19:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.293 01:19:58 -- common/autotest_common.sh@10 -- # set +x 00:12:06.293 ************************************ 00:12:06.293 END TEST nvmf_abort 00:12:06.293 ************************************ 00:12:06.293 01:19:58 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:06.293 01:19:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:06.293 01:19:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:06.293 01:19:58 -- common/autotest_common.sh@10 -- # set +x 00:12:06.293 ************************************ 00:12:06.293 START TEST nvmf_ns_hotplug_stress 00:12:06.293 ************************************ 00:12:06.293 01:19:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:06.552 * Looking for test storage... 
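Stripped of the rpc_cmd/xtrace wrapping, the nvmf_abort run that just finished drove the following target setup and workload; this condensed sketch uses the same methods and arguments that appear in the trace, expressed as direct scripts/rpc.py calls, and assumes a running nvmf_tgt plus the 10.0.0.2 namespace network set up earlier in this log:

# Condensed sketch of the abort.sh sequence traced above (not the script itself).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256           # TCP transport, options as traced
$rpc bdev_malloc_create 64 4096 -b Malloc0                    # 64 MiB ram disk, 4096-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000              # 1,000,000 us injected latency
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: queue 128 I/Os against the slow namespace and abort them.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The delay bdev is what guarantees there are still queued I/Os for the abort workload to cancel; the run above reported 32981 successful aborts out of 33042 submitted.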
00:12:06.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.552 01:19:58 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.552 01:19:58 -- nvmf/common.sh@7 -- # uname -s 00:12:06.552 01:19:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.552 01:19:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.552 01:19:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.552 01:19:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.552 01:19:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.552 01:19:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.552 01:19:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.552 01:19:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.552 01:19:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.552 01:19:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.552 01:19:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.552 01:19:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.552 01:19:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.552 01:19:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.552 01:19:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.552 01:19:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.552 01:19:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.552 01:19:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.552 01:19:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.552 01:19:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.552 01:19:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.553 01:19:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.553 01:19:58 -- paths/export.sh@5 -- # export PATH 00:12:06.553 01:19:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.553 01:19:58 -- nvmf/common.sh@46 -- # : 0 00:12:06.553 01:19:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:06.553 01:19:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:06.553 01:19:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:06.553 01:19:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.553 01:19:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.553 01:19:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:06.553 01:19:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:06.553 01:19:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:06.553 01:19:58 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.553 01:19:58 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:06.553 01:19:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:06.553 01:19:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.553 01:19:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:06.553 01:19:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:06.553 01:19:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:06.553 01:19:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.553 01:19:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.553 01:19:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.553 01:19:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:06.553 01:19:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:06.553 01:19:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:06.553 01:19:58 -- common/autotest_common.sh@10 -- # set +x 00:12:08.459 01:20:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:08.459 01:20:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:08.459 01:20:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:08.459 01:20:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:08.459 01:20:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:08.459 01:20:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:08.459 01:20:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:08.459 01:20:00 -- nvmf/common.sh@294 -- # net_devs=() 00:12:08.459 01:20:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:08.459 01:20:00 -- nvmf/common.sh@295 
-- # e810=() 00:12:08.459 01:20:00 -- nvmf/common.sh@295 -- # local -ga e810 00:12:08.459 01:20:00 -- nvmf/common.sh@296 -- # x722=() 00:12:08.459 01:20:00 -- nvmf/common.sh@296 -- # local -ga x722 00:12:08.459 01:20:00 -- nvmf/common.sh@297 -- # mlx=() 00:12:08.459 01:20:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:08.459 01:20:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.459 01:20:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.459 01:20:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.459 01:20:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.459 01:20:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.459 01:20:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.459 01:20:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.459 01:20:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.459 01:20:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.459 01:20:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.459 01:20:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.459 01:20:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:08.459 01:20:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:08.459 01:20:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:08.459 01:20:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:08.459 01:20:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:08.459 01:20:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:08.459 01:20:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:08.459 01:20:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:08.459 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:08.460 01:20:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:08.460 01:20:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:08.460 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:08.460 01:20:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:08.460 01:20:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:08.460 01:20:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.460 01:20:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:08.460 01:20:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.460 01:20:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:08.460 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:12:08.460 01:20:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.460 01:20:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:08.460 01:20:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.460 01:20:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:08.460 01:20:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.460 01:20:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:08.460 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:08.460 01:20:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.460 01:20:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:08.460 01:20:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:08.460 01:20:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:08.460 01:20:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.460 01:20:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.460 01:20:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.460 01:20:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:08.460 01:20:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.460 01:20:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.460 01:20:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:08.460 01:20:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.460 01:20:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.460 01:20:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:08.460 01:20:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:08.460 01:20:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.460 01:20:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.460 01:20:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.460 01:20:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.460 01:20:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:08.460 01:20:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.460 01:20:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.460 01:20:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.460 01:20:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:08.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:12:08.460 00:12:08.460 --- 10.0.0.2 ping statistics --- 00:12:08.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.460 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:12:08.460 01:20:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:12:08.460 00:12:08.460 --- 10.0.0.1 ping statistics --- 00:12:08.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.460 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:12:08.460 01:20:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.460 01:20:00 -- nvmf/common.sh@410 -- # return 0 00:12:08.460 01:20:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:08.460 01:20:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.460 01:20:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:08.460 01:20:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.460 01:20:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:08.460 01:20:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:08.719 01:20:00 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:08.719 01:20:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:08.719 01:20:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:08.719 01:20:00 -- common/autotest_common.sh@10 -- # set +x 00:12:08.719 01:20:00 -- nvmf/common.sh@469 -- # nvmfpid=589689 00:12:08.719 01:20:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:08.719 01:20:00 -- nvmf/common.sh@470 -- # waitforlisten 589689 00:12:08.719 01:20:00 -- common/autotest_common.sh@819 -- # '[' -z 589689 ']' 00:12:08.719 01:20:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.719 01:20:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:08.719 01:20:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.719 01:20:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:08.719 01:20:00 -- common/autotest_common.sh@10 -- # set +x 00:12:08.719 [2024-07-27 01:20:00.272955] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:08.719 [2024-07-27 01:20:00.273049] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.719 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.719 [2024-07-27 01:20:00.337483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:08.719 [2024-07-27 01:20:00.446378] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:08.719 [2024-07-27 01:20:00.446531] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.719 [2024-07-27 01:20:00.446548] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.719 [2024-07-27 01:20:00.446560] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
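For reference, the nvmf_tcp_init sequence traced above reduces to the short shell sketch below. It is a condensed reconstruction built only from the ip, iptables and ping calls visible in this log (the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are the ones used in this run); it is not the full nvmf/common.sh logic.

  # Move one e810 port into a private namespace as the target side;
  # the other port stays in the root namespace as the initiator side.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP through
  ping -c 1 10.0.0.2                                                 # reachability check, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp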
00:12:08.719 [2024-07-27 01:20:00.446651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.719 [2024-07-27 01:20:00.446716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.719 [2024-07-27 01:20:00.446719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.663 01:20:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:09.664 01:20:01 -- common/autotest_common.sh@852 -- # return 0 00:12:09.664 01:20:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:09.664 01:20:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:09.664 01:20:01 -- common/autotest_common.sh@10 -- # set +x 00:12:09.664 01:20:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.664 01:20:01 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:09.664 01:20:01 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:09.926 [2024-07-27 01:20:01.465726] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.926 01:20:01 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:10.183 01:20:01 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.442 [2024-07-27 01:20:01.948381] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.442 01:20:01 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:10.703 01:20:02 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:10.703 Malloc0 00:12:10.961 01:20:02 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:10.961 Delay0 00:12:10.961 01:20:02 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.219 01:20:02 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:11.477 NULL1 00:12:11.477 01:20:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:11.734 01:20:03 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=590123 00:12:11.734 01:20:03 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:11.734 01:20:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:11.734 01:20:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.734 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.112 Read completed with error (sct=0, sc=11) 00:12:13.112 
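Condensed from the rpc.py trace above, the target-side bring-up for this ns_hotplug_stress run amounts to the sketch below. The NQN, listen address, bdev names and perf arguments are taken from this log; the long workspace path is collapsed into the rpc_py variable the script itself defines, so treat this as a readable summary rather than the script source.

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py bdev_malloc_create 32 512 -b Malloc0                       # backing bdev
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc_py bdev_null_create NULL1 1000 512                            # resizable null bdev
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # Keep random reads in flight while namespaces are hot-plugged underneath.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!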
01:20:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.370 01:20:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:13.370 01:20:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:13.627 true 00:12:13.627 01:20:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:13.628 01:20:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.563 01:20:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:14.563 01:20:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:14.563 01:20:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:14.821 true 00:12:14.821 01:20:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:14.821 01:20:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.079 01:20:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.336 01:20:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:15.336 01:20:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:15.594 true 00:12:15.594 01:20:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:15.594 01:20:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.531 01:20:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:16.790 01:20:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:16.790 01:20:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:16.790 true 00:12:17.048 01:20:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:17.048 01:20:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.048 01:20:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.305 01:20:09 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:17.305 01:20:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:17.563 true 00:12:17.563 01:20:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:17.563 01:20:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.495 01:20:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.753 01:20:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:18.753 01:20:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:19.010 true 00:12:19.010 01:20:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:19.010 01:20:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.268 01:20:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.526 01:20:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:19.526 01:20:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:19.784 true 00:12:19.784 01:20:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:19.784 01:20:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.719 01:20:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.976 01:20:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:20.976 01:20:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:20.976 true 00:12:20.976 01:20:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:20.976 01:20:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.232 01:20:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.487 01:20:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:21.487 01:20:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:21.743 true 00:12:21.743 01:20:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:21.743 01:20:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.700 01:20:14 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.266 01:20:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:23.266 01:20:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:23.266 true 00:12:23.266 01:20:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:23.266 01:20:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.524 01:20:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.782 01:20:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:23.782 01:20:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:24.040 true 00:12:24.040 01:20:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:24.040 01:20:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.297 01:20:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.555 01:20:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:24.555 01:20:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:24.812 true 00:12:24.813 01:20:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:24.813 01:20:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.183 01:20:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:26.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.183 01:20:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:26.183 01:20:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:26.440 true 00:12:26.440 01:20:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:26.440 01:20:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.374 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.374 01:20:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.374 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.374 
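The repeating sh@44-sh@50 entries above are single iterations of the hot-plug loop: confirm the perf workload is still alive, detach and re-attach namespace 1 of cnode1, then bump the null_size counter and resize NULL1 under I/O. The loop construct itself is not visible in the trace, so the sketch below (reusing rpc_py and PERF_PID from the setup sketch above) is only a plausible reconstruction of the step order shown.

  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 $null_size                        # grow NULL1 while reads run
  done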
01:20:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:27.374 01:20:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:27.632 true 00:12:27.632 01:20:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:27.632 01:20:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.890 01:20:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.148 01:20:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:28.148 01:20:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:28.406 true 00:12:28.406 01:20:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:28.406 01:20:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.340 01:20:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:29.598 01:20:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:29.598 01:20:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:29.856 true 00:12:29.856 01:20:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:29.856 01:20:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.114 01:20:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.372 01:20:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:30.372 01:20:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:30.630 true 00:12:30.630 01:20:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:30.630 01:20:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.566 01:20:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:31.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:31.822 01:20:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:31.822 01:20:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:32.079 true 00:12:32.079 01:20:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:32.079 01:20:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.337 01:20:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.595 01:20:24 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:32.595 01:20:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:32.595 true 00:12:32.855 01:20:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:32.855 01:20:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:33.791 01:20:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:33.791 01:20:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:33.791 01:20:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:34.049 true 00:12:34.049 01:20:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:34.049 01:20:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.307 01:20:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.566 01:20:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:34.566 01:20:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:34.823 true 00:12:34.823 01:20:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:34.823 01:20:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.757 01:20:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.014 01:20:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:36.014 01:20:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:36.271 true 00:12:36.271 01:20:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:36.271 01:20:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.528 01:20:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.786 01:20:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:36.786 01:20:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:37.043 true 00:12:37.043 01:20:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:37.043 01:20:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.977 01:20:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:38.234 01:20:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:38.234 01:20:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:38.234 true 00:12:38.234 01:20:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:38.234 01:20:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.492 01:20:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.784 01:20:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:38.784 01:20:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:39.042 true 00:12:39.042 01:20:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:39.042 01:20:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.975 01:20:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.233 01:20:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:40.233 01:20:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:40.490 true 00:12:40.490 01:20:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:40.490 01:20:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.747 01:20:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.004 01:20:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:41.004 01:20:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:41.261 true 00:12:41.261 01:20:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:41.261 01:20:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.197 Initializing NVMe Controllers 00:12:42.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:42.197 Controller IO queue size 128, less than required. 00:12:42.197 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:42.197 Controller IO queue size 128, less than required. 00:12:42.197 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:12:42.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:42.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:42.197 Initialization complete. Launching workers. 00:12:42.197 ======================================================== 00:12:42.197 Latency(us) 00:12:42.197 Device Information : IOPS MiB/s Average min max 00:12:42.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 864.27 0.42 82576.98 2102.63 1072625.65 00:12:42.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12491.95 6.10 10246.06 1650.17 435259.14 00:12:42.197 ======================================================== 00:12:42.197 Total : 13356.22 6.52 14926.51 1650.17 1072625.65 00:12:42.197 00:12:42.197 01:20:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.454 01:20:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:42.454 01:20:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:42.712 true 00:12:42.712 01:20:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 590123 00:12:42.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (590123) - No such process 00:12:42.712 01:20:34 -- target/ns_hotplug_stress.sh@53 -- # wait 590123 00:12:42.712 01:20:34 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.970 01:20:34 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:42.971 01:20:34 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:12:42.971 01:20:34 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:12:42.971 01:20:34 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:12:42.971 01:20:34 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:42.971 01:20:34 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:12:43.228 null0 00:12:43.228 01:20:34 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:43.228 01:20:34 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:43.228 01:20:34 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:12:43.486 null1 00:12:43.486 01:20:35 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:43.486 01:20:35 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:43.486 01:20:35 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:12:43.744 null2 00:12:43.744 01:20:35 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:43.744 01:20:35 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:43.744 01:20:35 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:44.002 null3 00:12:44.002 01:20:35 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:44.002 01:20:35 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:44.002 01:20:35 -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:44.259 null4 00:12:44.259 01:20:35 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:44.260 01:20:35 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:44.260 01:20:35 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:44.517 null5 00:12:44.517 01:20:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:44.517 01:20:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:44.517 01:20:36 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:44.776 null6 00:12:44.776 01:20:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:44.776 01:20:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:44.776 01:20:36 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:45.034 null7 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.034 01:20:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@66 -- # wait 594283 594284 594286 594288 594290 594293 594295 594297 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.035 01:20:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:45.293 01:20:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:45.293 01:20:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.293 01:20:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:45.293 01:20:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.293 01:20:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:45.293 01:20:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:45.293 01:20:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:45.293 01:20:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:45.551 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.552 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:45.810 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:45.810 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:45.810 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.810 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.810 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:45.810 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:45.810 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:45.810 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.069 01:20:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:46.328 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:46.328 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:46.328 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.328 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:46.328 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:46.328 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:46.328 01:20:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:46.328 01:20:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.586 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:46.845 01:20:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:46.845 01:20:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.845 01:20:38 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:46.845 01:20:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:46.845 01:20:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:46.845 01:20:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:46.845 01:20:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:46.845 01:20:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.103 01:20:38 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:47.361 01:20:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:47.362 01:20:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:47.362 01:20:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.362 01:20:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:47.362 01:20:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:47.362 01:20:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:47.362 01:20:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:47.362 01:20:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
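The add/remove cycles above come from lines 16-18 of ns_hotplug_stress.sh: line 16 is a loop counter, line 17 attaches a null bdev to cnode1 as a namespace, line 18 detaches it again. A rough bash reconstruction, inferred only from the xtrace visible here (the interleaved ordering suggests one background worker per namespace; the real script may be organized differently):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
add_remove() {                        # hypothetical helper name, not taken from the script
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; ++i)); do    # the (( ++i )) / (( i < 10 )) checks logged at line 16
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # line 17
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # line 18
    done
}
for n in $(seq 1 8); do
    add_remove "$n" "null$((n - 1))" &    # null0..null7 were created earlier in the test
done
wait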
00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.620 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:47.878 01:20:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:47.879 01:20:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.879 01:20:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:47.879 01:20:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:47.879 01:20:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:47.879 01:20:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:47.879 01:20:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:47.879 01:20:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.137 01:20:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:48.395 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:48.395 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:48.395 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.395 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:48.395 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:48.395 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:48.395 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:48.395 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.654 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:48.912 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:48.912 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:48.912 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.912 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:48.912 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:48.912 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:48.912 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:48.912 01:20:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.171 01:20:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:49.430 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:49.430 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.430 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:49.430 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:49.430 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:49.430 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:49.430 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:12:49.430 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.689 01:20:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:49.947 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:49.947 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:49.947 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:49.947 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:12:49.947 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:49.947 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.947 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:49.947 01:20:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:50.206 01:20:41 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:50.206 01:20:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:50.206 01:20:41 -- nvmf/common.sh@116 -- # sync 00:12:50.206 01:20:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:50.206 01:20:41 -- nvmf/common.sh@119 -- # set +e 00:12:50.206 01:20:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:50.206 01:20:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:50.206 rmmod nvme_tcp 00:12:50.206 rmmod nvme_fabrics 00:12:50.206 rmmod nvme_keyring 00:12:50.206 01:20:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:50.206 01:20:41 -- nvmf/common.sh@123 -- # set -e 00:12:50.206 01:20:41 -- nvmf/common.sh@124 -- # return 0 00:12:50.206 01:20:41 -- nvmf/common.sh@477 -- # '[' -n 589689 ']' 00:12:50.206 01:20:41 -- nvmf/common.sh@478 -- # killprocess 589689 00:12:50.206 01:20:41 -- common/autotest_common.sh@926 -- # '[' -z 589689 ']' 00:12:50.206 01:20:41 -- common/autotest_common.sh@930 -- # kill -0 589689 00:12:50.206 01:20:41 -- common/autotest_common.sh@931 -- # uname 00:12:50.206 01:20:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:50.206 01:20:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 589689 00:12:50.464 01:20:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:50.464 01:20:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:50.464 01:20:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 589689' 00:12:50.464 killing 
process with pid 589689 00:12:50.464 01:20:41 -- common/autotest_common.sh@945 -- # kill 589689 00:12:50.464 01:20:41 -- common/autotest_common.sh@950 -- # wait 589689 00:12:50.722 01:20:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:50.722 01:20:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:50.722 01:20:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:50.722 01:20:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.722 01:20:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:50.722 01:20:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.722 01:20:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.722 01:20:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.630 01:20:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:52.630 00:12:52.630 real 0m46.252s 00:12:52.630 user 3m28.800s 00:12:52.630 sys 0m16.140s 00:12:52.630 01:20:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:52.630 01:20:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.630 ************************************ 00:12:52.630 END TEST nvmf_ns_hotplug_stress 00:12:52.630 ************************************ 00:12:52.630 01:20:44 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:52.630 01:20:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:52.630 01:20:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:52.630 01:20:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.630 ************************************ 00:12:52.630 START TEST nvmf_connect_stress 00:12:52.630 ************************************ 00:12:52.630 01:20:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:52.630 * Looking for test storage... 
00:12:52.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.630 01:20:44 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.630 01:20:44 -- nvmf/common.sh@7 -- # uname -s 00:12:52.630 01:20:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.630 01:20:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.630 01:20:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.630 01:20:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.630 01:20:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.630 01:20:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.630 01:20:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.630 01:20:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.630 01:20:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.630 01:20:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.630 01:20:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.630 01:20:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:52.630 01:20:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.630 01:20:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.630 01:20:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.630 01:20:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.630 01:20:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.630 01:20:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.630 01:20:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.630 01:20:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.630 01:20:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.630 01:20:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.630 01:20:44 -- paths/export.sh@5 -- # export PATH 00:12:52.630 01:20:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.630 01:20:44 -- nvmf/common.sh@46 -- # : 0 00:12:52.630 01:20:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:52.630 01:20:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:52.630 01:20:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:52.630 01:20:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.630 01:20:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.630 01:20:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:52.630 01:20:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:52.630 01:20:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:52.631 01:20:44 -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:52.631 01:20:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:52.631 01:20:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.631 01:20:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:52.631 01:20:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:52.631 01:20:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:52.631 01:20:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.631 01:20:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.631 01:20:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.890 01:20:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:52.890 01:20:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:52.890 01:20:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:52.890 01:20:44 -- common/autotest_common.sh@10 -- # set +x 00:12:54.794 01:20:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:54.794 01:20:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:54.794 01:20:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:54.794 01:20:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:54.794 01:20:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:54.794 01:20:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:54.794 01:20:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:54.794 01:20:46 -- nvmf/common.sh@294 -- # net_devs=() 00:12:54.794 01:20:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:54.794 01:20:46 -- nvmf/common.sh@295 -- # e810=() 00:12:54.794 01:20:46 -- nvmf/common.sh@295 -- # local -ga e810 00:12:54.794 01:20:46 -- nvmf/common.sh@296 -- # x722=() 
00:12:54.794 01:20:46 -- nvmf/common.sh@296 -- # local -ga x722 00:12:54.794 01:20:46 -- nvmf/common.sh@297 -- # mlx=() 00:12:54.794 01:20:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:54.794 01:20:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.794 01:20:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.794 01:20:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.794 01:20:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.794 01:20:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.794 01:20:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.794 01:20:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.794 01:20:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.794 01:20:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.794 01:20:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.794 01:20:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.794 01:20:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:54.794 01:20:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:54.794 01:20:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:54.794 01:20:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:54.794 01:20:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:54.794 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:54.794 01:20:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:54.794 01:20:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:54.794 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:54.794 01:20:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:54.794 01:20:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:54.794 01:20:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.794 01:20:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:54.794 01:20:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.794 01:20:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:54.794 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:54.794 01:20:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
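The nvmf/common.sh block above selects the E810 ports by PCI vendor/device ID (0x8086:0x159b) and resolves each one to its kernel net device through sysfs. A simplified sketch of that mapping step, reusing the addresses already printed in this log (not the verbatim common.sh code):

# Map each selected PCI function to its kernel interface via /sys/bus/pci/.../net/.
pci_devs=(0000:0a:00.0 0000:0a:00.1)                    # the two e810 functions found above
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done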
00:12:54.794 01:20:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:54.794 01:20:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.794 01:20:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:54.794 01:20:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.794 01:20:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:54.794 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:54.794 01:20:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.794 01:20:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:54.794 01:20:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:54.794 01:20:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:54.794 01:20:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.794 01:20:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.794 01:20:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.794 01:20:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:54.794 01:20:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.794 01:20:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.794 01:20:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:54.794 01:20:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.794 01:20:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.794 01:20:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:54.794 01:20:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:54.794 01:20:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.794 01:20:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.794 01:20:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.794 01:20:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.794 01:20:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:54.794 01:20:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.794 01:20:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.794 01:20:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.794 01:20:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:54.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:12:54.794 00:12:54.794 --- 10.0.0.2 ping statistics --- 00:12:54.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.794 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:12:54.794 01:20:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:12:54.794 00:12:54.794 --- 10.0.0.1 ping statistics --- 00:12:54.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.794 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:12:54.794 01:20:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.794 01:20:46 -- nvmf/common.sh@410 -- # return 0 00:12:54.794 01:20:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:54.794 01:20:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.794 01:20:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:54.794 01:20:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.794 01:20:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:54.794 01:20:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:54.794 01:20:46 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:54.794 01:20:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:54.794 01:20:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:54.794 01:20:46 -- common/autotest_common.sh@10 -- # set +x 00:12:54.794 01:20:46 -- nvmf/common.sh@469 -- # nvmfpid=597046 00:12:54.794 01:20:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:54.794 01:20:46 -- nvmf/common.sh@470 -- # waitforlisten 597046 00:12:54.794 01:20:46 -- common/autotest_common.sh@819 -- # '[' -z 597046 ']' 00:12:54.794 01:20:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.794 01:20:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:54.794 01:20:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.794 01:20:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:54.794 01:20:46 -- common/autotest_common.sh@10 -- # set +x 00:12:54.794 [2024-07-27 01:20:46.538499] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:54.794 [2024-07-27 01:20:46.538563] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.060 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.060 [2024-07-27 01:20:46.603813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:55.060 [2024-07-27 01:20:46.716814] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:55.060 [2024-07-27 01:20:46.716974] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.060 [2024-07-27 01:20:46.716990] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.060 [2024-07-27 01:20:46.717004] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
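Taken together, the nvmf_tcp_init steps above wire up a point-to-point TCP path across the two E810 ports: the target-side port is moved into a private network namespace, both ends get 10.0.0.x addresses, TCP port 4420 is opened, and both directions are ping-checked before the target is started inside that namespace. Condensed from the commands shown in this log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (host)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host
# nvmfappstart then runs the target inside the namespace, as logged above:
# ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE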
00:12:55.060 [2024-07-27 01:20:46.717098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.060 [2024-07-27 01:20:46.717125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.060 [2024-07-27 01:20:46.717128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.027 01:20:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:56.027 01:20:47 -- common/autotest_common.sh@852 -- # return 0 00:12:56.027 01:20:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:56.027 01:20:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:56.027 01:20:47 -- common/autotest_common.sh@10 -- # set +x 00:12:56.027 01:20:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.027 01:20:47 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:56.027 01:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.027 01:20:47 -- common/autotest_common.sh@10 -- # set +x 00:12:56.027 [2024-07-27 01:20:47.510486] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.027 01:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.027 01:20:47 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:56.027 01:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.027 01:20:47 -- common/autotest_common.sh@10 -- # set +x 00:12:56.027 01:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.027 01:20:47 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.027 01:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.027 01:20:47 -- common/autotest_common.sh@10 -- # set +x 00:12:56.027 [2024-07-27 01:20:47.539222] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.027 01:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.027 01:20:47 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:56.027 01:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.027 01:20:47 -- common/autotest_common.sh@10 -- # set +x 00:12:56.027 NULL1 00:12:56.027 01:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.027 01:20:47 -- target/connect_stress.sh@21 -- # PERF_PID=597123 00:12:56.027 01:20:47 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:56.027 01:20:47 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:56.027 01:20:47 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # seq 1 20 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:56.027 01:20:47 -- target/connect_stress.sh@28 -- # cat 00:12:56.027 01:20:47 -- target/connect_stress.sh@34 -- # kill -0 597123 00:12:56.027 01:20:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.027 01:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.027 01:20:47 -- common/autotest_common.sh@10 -- # set +x 00:12:56.285 01:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.285 01:20:47 -- target/connect_stress.sh@34 -- # kill -0 597123 00:12:56.285 01:20:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.285 01:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.285 01:20:47 -- common/autotest_common.sh@10 -- # set +x 00:12:56.543 01:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.543 01:20:48 -- target/connect_stress.sh@34 -- # kill -0 597123 00:12:56.543 01:20:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.543 01:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.543 01:20:48 -- common/autotest_common.sh@10 -- # set +x 00:12:57.111 01:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.111 01:20:48 -- target/connect_stress.sh@34 -- # 
kill -0 597123 00:12:57.111 01:20:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.111 01:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.111 01:20:48 -- common/autotest_common.sh@10 -- # set +x 00:12:57.370 01:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.370 01:20:48 -- target/connect_stress.sh@34 -- # kill -0 597123 00:12:57.370 01:20:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.370 01:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.370 01:20:48 -- common/autotest_common.sh@10 -- # set +x 00:12:57.627 01:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.627 01:20:49 -- target/connect_stress.sh@34 -- # kill -0 597123 00:12:57.627 01:20:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.627 01:20:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.627 01:20:49 -- common/autotest_common.sh@10 -- # set +x 00:12:57.886 01:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.886 01:20:49 -- target/connect_stress.sh@34 -- # kill -0 597123 00:12:57.886 01:20:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.886 01:20:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.886 01:20:49 -- common/autotest_common.sh@10 -- # set +x 00:12:58.144 01:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.144 01:20:49 -- target/connect_stress.sh@34 -- # kill -0 597123 00:12:58.144 01:20:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.144 01:20:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.144 01:20:49 -- common/autotest_common.sh@10 -- # set +x 00:12:58.712 01:20:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.712 01:20:50 -- target/connect_stress.sh@34 -- # kill -0 597123 00:12:58.712 01:20:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.712 01:20:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.712 01:20:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.968 01:20:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.968 01:20:50 -- target/connect_stress.sh@34 -- # kill -0 597123 00:12:58.968 01:20:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.968 01:20:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.968 01:20:50 -- common/autotest_common.sh@10 -- # set +x 00:12:59.226 01:20:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.226 01:20:50 -- target/connect_stress.sh@34 -- # kill -0 597123 00:12:59.226 01:20:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.226 01:20:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.226 01:20:50 -- common/autotest_common.sh@10 -- # set +x 00:12:59.485 01:20:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.485 01:20:51 -- target/connect_stress.sh@34 -- # kill -0 597123 00:12:59.485 01:20:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.485 01:20:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.485 01:20:51 -- common/autotest_common.sh@10 -- # set +x 00:12:59.742 01:20:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.742 01:20:51 -- target/connect_stress.sh@34 -- # kill -0 597123 00:12:59.742 01:20:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.742 01:20:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.742 01:20:51 -- common/autotest_common.sh@10 -- # set +x 00:13:00.309 01:20:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.309 01:20:51 -- target/connect_stress.sh@34 -- # kill -0 597123 
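Editor's note: the repeating 'kill -0 597123' / 'rpc_cmd' pairs above and below are the heart of connect_stress.sh: while the connect_stress client (PID 597123, started with -t 10 for a 10-second run) is still alive, the script keeps replaying a batch of RPCs against the target; once the client exits, kill -0 fails and the script falls through to wait and clean up, as seen further down. A condensed sketch of that supervision loop (xtrace does not print redirections, so the exact way the batch file is fed to rpc_cmd is an assumption):

  # Sketch only: PERF_PID is the backgrounded connect_stress client, rpcs the RPC batch file.
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$rpcs"      # replay the RPC batch while the client stresses the target
  done
  wait "$PERF_PID"           # reap the client once its timed run is over
  rm -f "$rpcs"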
00:13:00.309 01:20:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.309 01:20:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.309 01:20:51 -- common/autotest_common.sh@10 -- # set +x 00:13:00.567 01:20:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.567 01:20:52 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:00.567 01:20:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.567 01:20:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.567 01:20:52 -- common/autotest_common.sh@10 -- # set +x 00:13:00.825 01:20:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.825 01:20:52 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:00.825 01:20:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.825 01:20:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.825 01:20:52 -- common/autotest_common.sh@10 -- # set +x 00:13:01.084 01:20:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.084 01:20:52 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:01.084 01:20:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.084 01:20:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.084 01:20:52 -- common/autotest_common.sh@10 -- # set +x 00:13:01.344 01:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.344 01:20:53 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:01.344 01:20:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.344 01:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.344 01:20:53 -- common/autotest_common.sh@10 -- # set +x 00:13:01.920 01:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.920 01:20:53 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:01.920 01:20:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.920 01:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.920 01:20:53 -- common/autotest_common.sh@10 -- # set +x 00:13:02.179 01:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.179 01:20:53 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:02.179 01:20:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.179 01:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.179 01:20:53 -- common/autotest_common.sh@10 -- # set +x 00:13:02.439 01:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.439 01:20:54 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:02.439 01:20:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.439 01:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.439 01:20:54 -- common/autotest_common.sh@10 -- # set +x 00:13:02.699 01:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.699 01:20:54 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:02.699 01:20:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.699 01:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.699 01:20:54 -- common/autotest_common.sh@10 -- # set +x 00:13:02.957 01:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.958 01:20:54 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:02.958 01:20:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.958 01:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.958 01:20:54 -- common/autotest_common.sh@10 -- # set +x 00:13:03.526 01:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.526 01:20:54 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:03.526 01:20:54 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.526 01:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.526 01:20:54 -- common/autotest_common.sh@10 -- # set +x 00:13:03.785 01:20:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.785 01:20:55 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:03.785 01:20:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.785 01:20:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.785 01:20:55 -- common/autotest_common.sh@10 -- # set +x 00:13:04.045 01:20:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.045 01:20:55 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:04.045 01:20:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.045 01:20:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.045 01:20:55 -- common/autotest_common.sh@10 -- # set +x 00:13:04.304 01:20:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.304 01:20:55 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:04.304 01:20:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.304 01:20:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.304 01:20:55 -- common/autotest_common.sh@10 -- # set +x 00:13:04.564 01:20:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.564 01:20:56 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:04.564 01:20:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.565 01:20:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.565 01:20:56 -- common/autotest_common.sh@10 -- # set +x 00:13:05.133 01:20:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.133 01:20:56 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:05.133 01:20:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.134 01:20:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.134 01:20:56 -- common/autotest_common.sh@10 -- # set +x 00:13:05.394 01:20:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.394 01:20:56 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:05.394 01:20:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.394 01:20:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.394 01:20:56 -- common/autotest_common.sh@10 -- # set +x 00:13:05.652 01:20:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.652 01:20:57 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:05.652 01:20:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.652 01:20:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.652 01:20:57 -- common/autotest_common.sh@10 -- # set +x 00:13:05.912 01:20:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.912 01:20:57 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:05.912 01:20:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.912 01:20:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.912 01:20:57 -- common/autotest_common.sh@10 -- # set +x 00:13:06.173 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:06.173 01:20:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.173 01:20:57 -- target/connect_stress.sh@34 -- # kill -0 597123 00:13:06.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (597123) - No such process 00:13:06.173 01:20:57 -- target/connect_stress.sh@38 -- # wait 597123 00:13:06.173 01:20:57 -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:06.173 01:20:57 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:06.173 01:20:57 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:06.173 01:20:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:06.173 01:20:57 -- nvmf/common.sh@116 -- # sync 00:13:06.173 01:20:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:06.173 01:20:57 -- nvmf/common.sh@119 -- # set +e 00:13:06.173 01:20:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:06.173 01:20:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:06.173 rmmod nvme_tcp 00:13:06.173 rmmod nvme_fabrics 00:13:06.173 rmmod nvme_keyring 00:13:06.173 01:20:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:06.173 01:20:57 -- nvmf/common.sh@123 -- # set -e 00:13:06.173 01:20:57 -- nvmf/common.sh@124 -- # return 0 00:13:06.173 01:20:57 -- nvmf/common.sh@477 -- # '[' -n 597046 ']' 00:13:06.173 01:20:57 -- nvmf/common.sh@478 -- # killprocess 597046 00:13:06.173 01:20:57 -- common/autotest_common.sh@926 -- # '[' -z 597046 ']' 00:13:06.173 01:20:57 -- common/autotest_common.sh@930 -- # kill -0 597046 00:13:06.434 01:20:57 -- common/autotest_common.sh@931 -- # uname 00:13:06.434 01:20:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:06.434 01:20:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 597046 00:13:06.434 01:20:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:06.434 01:20:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:06.434 01:20:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 597046' 00:13:06.434 killing process with pid 597046 00:13:06.434 01:20:57 -- common/autotest_common.sh@945 -- # kill 597046 00:13:06.434 01:20:57 -- common/autotest_common.sh@950 -- # wait 597046 00:13:06.695 01:20:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:06.695 01:20:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:06.695 01:20:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:06.695 01:20:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.695 01:20:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:06.695 01:20:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.695 01:20:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.695 01:20:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.598 01:21:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:08.598 00:13:08.598 real 0m15.967s 00:13:08.598 user 0m40.289s 00:13:08.598 sys 0m6.024s 00:13:08.598 01:21:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.598 01:21:00 -- common/autotest_common.sh@10 -- # set +x 00:13:08.598 ************************************ 00:13:08.598 END TEST nvmf_connect_stress 00:13:08.598 ************************************ 00:13:08.598 01:21:00 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:08.598 01:21:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:08.598 01:21:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:08.598 01:21:00 -- common/autotest_common.sh@10 -- # set +x 00:13:08.598 ************************************ 00:13:08.598 START TEST nvmf_fused_ordering 00:13:08.598 ************************************ 00:13:08.598 01:21:00 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:08.857 * Looking for test storage... 00:13:08.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.857 01:21:00 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.857 01:21:00 -- nvmf/common.sh@7 -- # uname -s 00:13:08.857 01:21:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.857 01:21:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.857 01:21:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.857 01:21:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.857 01:21:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.857 01:21:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.857 01:21:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.857 01:21:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.857 01:21:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.857 01:21:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.857 01:21:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.857 01:21:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.857 01:21:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.857 01:21:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.857 01:21:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.857 01:21:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.857 01:21:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.857 01:21:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.857 01:21:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.857 01:21:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.857 01:21:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.857 01:21:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.857 01:21:00 -- paths/export.sh@5 -- # export PATH 00:13:08.857 01:21:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.857 01:21:00 -- nvmf/common.sh@46 -- # : 0 00:13:08.857 01:21:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:08.857 01:21:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:08.857 01:21:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:08.857 01:21:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.857 01:21:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.857 01:21:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:08.857 01:21:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:08.857 01:21:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:08.857 01:21:00 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:08.857 01:21:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:08.857 01:21:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.857 01:21:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:08.857 01:21:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:08.857 01:21:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:08.857 01:21:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.857 01:21:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.858 01:21:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.858 01:21:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:08.858 01:21:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:08.858 01:21:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:08.858 01:21:00 -- common/autotest_common.sh@10 -- # set +x 00:13:10.763 01:21:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:10.763 01:21:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:10.763 01:21:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:10.763 01:21:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:10.763 01:21:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:10.763 01:21:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:10.763 01:21:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:10.763 01:21:02 -- nvmf/common.sh@294 -- # net_devs=() 00:13:10.763 01:21:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:10.763 01:21:02 -- nvmf/common.sh@295 -- # e810=() 00:13:10.763 01:21:02 -- nvmf/common.sh@295 -- # local -ga e810 00:13:10.763 01:21:02 -- nvmf/common.sh@296 -- # x722=() 
00:13:10.763 01:21:02 -- nvmf/common.sh@296 -- # local -ga x722 00:13:10.763 01:21:02 -- nvmf/common.sh@297 -- # mlx=() 00:13:10.763 01:21:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:10.763 01:21:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.763 01:21:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.763 01:21:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.763 01:21:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.763 01:21:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.763 01:21:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.763 01:21:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.763 01:21:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.763 01:21:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.763 01:21:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.763 01:21:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.763 01:21:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:10.763 01:21:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:10.763 01:21:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:10.764 01:21:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:10.764 01:21:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:10.764 01:21:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:10.764 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:10.764 01:21:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:10.764 01:21:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:10.764 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:10.764 01:21:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:10.764 01:21:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:10.764 01:21:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.764 01:21:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:10.764 01:21:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.764 01:21:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:10.764 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:10.764 01:21:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
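Editor's note: the 'Found 0000:0a:00.0 (0x8086 - 0x159b)' and 'Found net devices under ...: cvl_0_0' lines here (and the matching pair for the second port just below) come from gather_supported_nvmf_pci_devs walking sysfs: each E810 PCI function exposes its kernel netdev names under /sys/bus/pci/devices/<bdf>/net/. A stripped-down sketch of that lookup, using the two BDFs from this run:

  # Sketch only: map each NIC PCI function to the netdev(s) created on top of it.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net device under $pci: $(basename "$netdir")"
      done
  done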
00:13:10.764 01:21:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:10.764 01:21:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.764 01:21:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:10.764 01:21:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.764 01:21:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:10.764 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:10.764 01:21:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.764 01:21:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:10.764 01:21:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:10.764 01:21:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:10.764 01:21:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.764 01:21:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.764 01:21:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.764 01:21:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:10.764 01:21:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.764 01:21:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.764 01:21:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:10.764 01:21:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.764 01:21:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.764 01:21:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:10.764 01:21:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:10.764 01:21:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.764 01:21:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.764 01:21:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.764 01:21:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.764 01:21:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:10.764 01:21:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.764 01:21:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.764 01:21:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.764 01:21:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:10.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:13:10.764 00:13:10.764 --- 10.0.0.2 ping statistics --- 00:13:10.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.764 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:13:10.764 01:21:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:13:10.764 00:13:10.764 --- 10.0.0.1 ping statistics --- 00:13:10.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.764 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:13:10.764 01:21:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.764 01:21:02 -- nvmf/common.sh@410 -- # return 0 00:13:10.764 01:21:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:10.764 01:21:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.764 01:21:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:10.764 01:21:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.764 01:21:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:10.764 01:21:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:10.764 01:21:02 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:10.764 01:21:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:10.764 01:21:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:10.764 01:21:02 -- common/autotest_common.sh@10 -- # set +x 00:13:10.764 01:21:02 -- nvmf/common.sh@469 -- # nvmfpid=600523 00:13:10.764 01:21:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:10.764 01:21:02 -- nvmf/common.sh@470 -- # waitforlisten 600523 00:13:10.764 01:21:02 -- common/autotest_common.sh@819 -- # '[' -z 600523 ']' 00:13:10.764 01:21:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.764 01:21:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:10.764 01:21:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.764 01:21:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:10.764 01:21:02 -- common/autotest_common.sh@10 -- # set +x 00:13:10.764 [2024-07-27 01:21:02.447464] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:10.764 [2024-07-27 01:21:02.447545] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.764 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.764 [2024-07-27 01:21:02.516747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.024 [2024-07-27 01:21:02.625337] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:11.025 [2024-07-27 01:21:02.625529] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.025 [2024-07-27 01:21:02.625548] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.025 [2024-07-27 01:21:02.625562] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
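Editor's note: by this point nvmf_tcp_init has built the back-to-back topology used for the rest of the run: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened on the initiator interface, and both directions are ping-tested before nvmf_tgt is started for the fused_ordering test. The same steps, condensed from the trace above (the initial address-flush commands and error handling are omitted):

  # Condensed sketch of the netns topology set up in the trace above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                            # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator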
00:13:11.025 [2024-07-27 01:21:02.625590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.993 01:21:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:11.993 01:21:03 -- common/autotest_common.sh@852 -- # return 0 00:13:11.993 01:21:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:11.993 01:21:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:11.993 01:21:03 -- common/autotest_common.sh@10 -- # set +x 00:13:11.993 01:21:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.993 01:21:03 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:11.993 01:21:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.993 01:21:03 -- common/autotest_common.sh@10 -- # set +x 00:13:11.993 [2024-07-27 01:21:03.408303] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.993 01:21:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.993 01:21:03 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:11.993 01:21:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.993 01:21:03 -- common/autotest_common.sh@10 -- # set +x 00:13:11.994 01:21:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.994 01:21:03 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.994 01:21:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.994 01:21:03 -- common/autotest_common.sh@10 -- # set +x 00:13:11.994 [2024-07-27 01:21:03.424483] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.994 01:21:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.994 01:21:03 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:11.994 01:21:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.994 01:21:03 -- common/autotest_common.sh@10 -- # set +x 00:13:11.994 NULL1 00:13:11.994 01:21:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.994 01:21:03 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:11.994 01:21:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.994 01:21:03 -- common/autotest_common.sh@10 -- # set +x 00:13:11.994 01:21:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.994 01:21:03 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:11.994 01:21:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.994 01:21:03 -- common/autotest_common.sh@10 -- # set +x 00:13:11.994 01:21:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.994 01:21:03 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:11.994 [2024-07-27 01:21:03.469336] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
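Editor's note: the rpc_cmd calls traced above build the target that the fused_ordering client then attaches to: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (allow-any-host, up to 10 namespaces), a listener on 10.0.0.2:4420, and a 1000 MB null bdev exposed as namespace 1, which matches the 'Namespace ID: 1 size: 1GB' line further down. Distilled from the trace into a readable sequence (rpc_cmd is the autotest wrapper that forwards each call to the target's JSON-RPC socket; the relative paths are shortened here):

  # Sketch only: target setup followed by the fused_ordering client invocation.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512            # 1000 MB null bdev, 512-byte blocks
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # The client names the same listener in its transport-ID string:
  test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'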
00:13:11.994 [2024-07-27 01:21:03.469397] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600623 ] 00:13:11.994 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.564 Attached to nqn.2016-06.io.spdk:cnode1 00:13:12.564 Namespace ID: 1 size: 1GB 00:13:12.564 fused_ordering(0) 00:13:12.564 fused_ordering(1) 00:13:12.564 fused_ordering(2) 00:13:12.564 fused_ordering(3) 00:13:12.564 fused_ordering(4) 00:13:12.564 fused_ordering(5) 00:13:12.564 fused_ordering(6) 00:13:12.564 fused_ordering(7) 00:13:12.564 fused_ordering(8) 00:13:12.564 fused_ordering(9) 00:13:12.564 fused_ordering(10) 00:13:12.564 fused_ordering(11) 00:13:12.564 fused_ordering(12) 00:13:12.564 fused_ordering(13) 00:13:12.564 fused_ordering(14) 00:13:12.564 fused_ordering(15) 00:13:12.564 fused_ordering(16) 00:13:12.564 fused_ordering(17) 00:13:12.564 fused_ordering(18) 00:13:12.564 fused_ordering(19) 00:13:12.564 fused_ordering(20) 00:13:12.564 fused_ordering(21) 00:13:12.564 fused_ordering(22) 00:13:12.564 fused_ordering(23) 00:13:12.564 fused_ordering(24) 00:13:12.564 fused_ordering(25) 00:13:12.564 fused_ordering(26) 00:13:12.564 fused_ordering(27) 00:13:12.564 fused_ordering(28) 00:13:12.564 fused_ordering(29) 00:13:12.564 fused_ordering(30) 00:13:12.564 fused_ordering(31) 00:13:12.564 fused_ordering(32) 00:13:12.564 fused_ordering(33) 00:13:12.564 fused_ordering(34) 00:13:12.564 fused_ordering(35) 00:13:12.564 fused_ordering(36) 00:13:12.564 fused_ordering(37) 00:13:12.564 fused_ordering(38) 00:13:12.564 fused_ordering(39) 00:13:12.564 fused_ordering(40) 00:13:12.564 fused_ordering(41) 00:13:12.564 fused_ordering(42) 00:13:12.564 fused_ordering(43) 00:13:12.564 fused_ordering(44) 00:13:12.564 fused_ordering(45) 00:13:12.564 fused_ordering(46) 00:13:12.564 fused_ordering(47) 00:13:12.564 fused_ordering(48) 00:13:12.564 fused_ordering(49) 00:13:12.564 fused_ordering(50) 00:13:12.564 fused_ordering(51) 00:13:12.564 fused_ordering(52) 00:13:12.564 fused_ordering(53) 00:13:12.564 fused_ordering(54) 00:13:12.564 fused_ordering(55) 00:13:12.564 fused_ordering(56) 00:13:12.564 fused_ordering(57) 00:13:12.564 fused_ordering(58) 00:13:12.564 fused_ordering(59) 00:13:12.564 fused_ordering(60) 00:13:12.564 fused_ordering(61) 00:13:12.564 fused_ordering(62) 00:13:12.564 fused_ordering(63) 00:13:12.564 fused_ordering(64) 00:13:12.564 fused_ordering(65) 00:13:12.564 fused_ordering(66) 00:13:12.564 fused_ordering(67) 00:13:12.564 fused_ordering(68) 00:13:12.564 fused_ordering(69) 00:13:12.564 fused_ordering(70) 00:13:12.564 fused_ordering(71) 00:13:12.564 fused_ordering(72) 00:13:12.564 fused_ordering(73) 00:13:12.564 fused_ordering(74) 00:13:12.564 fused_ordering(75) 00:13:12.564 fused_ordering(76) 00:13:12.564 fused_ordering(77) 00:13:12.564 fused_ordering(78) 00:13:12.564 fused_ordering(79) 00:13:12.564 fused_ordering(80) 00:13:12.564 fused_ordering(81) 00:13:12.564 fused_ordering(82) 00:13:12.564 fused_ordering(83) 00:13:12.564 fused_ordering(84) 00:13:12.564 fused_ordering(85) 00:13:12.564 fused_ordering(86) 00:13:12.564 fused_ordering(87) 00:13:12.564 fused_ordering(88) 00:13:12.564 fused_ordering(89) 00:13:12.564 fused_ordering(90) 00:13:12.564 fused_ordering(91) 00:13:12.564 fused_ordering(92) 00:13:12.564 fused_ordering(93) 00:13:12.564 fused_ordering(94) 00:13:12.564 fused_ordering(95) 00:13:12.564 fused_ordering(96) 00:13:12.564 
fused_ordering(97) 00:13:12.564 fused_ordering(98) 00:13:12.564 fused_ordering(99) 00:13:12.564 fused_ordering(100) 00:13:12.564 fused_ordering(101) 00:13:12.564 fused_ordering(102) 00:13:12.564 fused_ordering(103) 00:13:12.564 fused_ordering(104) 00:13:12.564 fused_ordering(105) 00:13:12.564 fused_ordering(106) 00:13:12.564 fused_ordering(107) 00:13:12.564 fused_ordering(108) 00:13:12.564 fused_ordering(109) 00:13:12.564 fused_ordering(110) 00:13:12.564 fused_ordering(111) 00:13:12.564 fused_ordering(112) 00:13:12.564 fused_ordering(113) 00:13:12.564 fused_ordering(114) 00:13:12.564 fused_ordering(115) 00:13:12.564 fused_ordering(116) 00:13:12.564 fused_ordering(117) 00:13:12.564 fused_ordering(118) 00:13:12.564 fused_ordering(119) 00:13:12.564 fused_ordering(120) 00:13:12.564 fused_ordering(121) 00:13:12.564 fused_ordering(122) 00:13:12.564 fused_ordering(123) 00:13:12.564 fused_ordering(124) 00:13:12.564 fused_ordering(125) 00:13:12.564 fused_ordering(126) 00:13:12.564 fused_ordering(127) 00:13:12.564 fused_ordering(128) 00:13:12.564 fused_ordering(129) 00:13:12.564 fused_ordering(130) 00:13:12.565 fused_ordering(131) 00:13:12.565 fused_ordering(132) 00:13:12.565 fused_ordering(133) 00:13:12.565 fused_ordering(134) 00:13:12.565 fused_ordering(135) 00:13:12.565 fused_ordering(136) 00:13:12.565 fused_ordering(137) 00:13:12.565 fused_ordering(138) 00:13:12.565 fused_ordering(139) 00:13:12.565 fused_ordering(140) 00:13:12.565 fused_ordering(141) 00:13:12.565 fused_ordering(142) 00:13:12.565 fused_ordering(143) 00:13:12.565 fused_ordering(144) 00:13:12.565 fused_ordering(145) 00:13:12.565 fused_ordering(146) 00:13:12.565 fused_ordering(147) 00:13:12.565 fused_ordering(148) 00:13:12.565 fused_ordering(149) 00:13:12.565 fused_ordering(150) 00:13:12.565 fused_ordering(151) 00:13:12.565 fused_ordering(152) 00:13:12.565 fused_ordering(153) 00:13:12.565 fused_ordering(154) 00:13:12.565 fused_ordering(155) 00:13:12.565 fused_ordering(156) 00:13:12.565 fused_ordering(157) 00:13:12.565 fused_ordering(158) 00:13:12.565 fused_ordering(159) 00:13:12.565 fused_ordering(160) 00:13:12.565 fused_ordering(161) 00:13:12.565 fused_ordering(162) 00:13:12.565 fused_ordering(163) 00:13:12.565 fused_ordering(164) 00:13:12.565 fused_ordering(165) 00:13:12.565 fused_ordering(166) 00:13:12.565 fused_ordering(167) 00:13:12.565 fused_ordering(168) 00:13:12.565 fused_ordering(169) 00:13:12.565 fused_ordering(170) 00:13:12.565 fused_ordering(171) 00:13:12.565 fused_ordering(172) 00:13:12.565 fused_ordering(173) 00:13:12.565 fused_ordering(174) 00:13:12.565 fused_ordering(175) 00:13:12.565 fused_ordering(176) 00:13:12.565 fused_ordering(177) 00:13:12.565 fused_ordering(178) 00:13:12.565 fused_ordering(179) 00:13:12.565 fused_ordering(180) 00:13:12.565 fused_ordering(181) 00:13:12.565 fused_ordering(182) 00:13:12.565 fused_ordering(183) 00:13:12.565 fused_ordering(184) 00:13:12.565 fused_ordering(185) 00:13:12.565 fused_ordering(186) 00:13:12.565 fused_ordering(187) 00:13:12.565 fused_ordering(188) 00:13:12.565 fused_ordering(189) 00:13:12.565 fused_ordering(190) 00:13:12.565 fused_ordering(191) 00:13:12.565 fused_ordering(192) 00:13:12.565 fused_ordering(193) 00:13:12.565 fused_ordering(194) 00:13:12.565 fused_ordering(195) 00:13:12.565 fused_ordering(196) 00:13:12.565 fused_ordering(197) 00:13:12.565 fused_ordering(198) 00:13:12.565 fused_ordering(199) 00:13:12.565 fused_ordering(200) 00:13:12.565 fused_ordering(201) 00:13:12.565 fused_ordering(202) 00:13:12.565 fused_ordering(203) 00:13:12.565 fused_ordering(204) 
00:13:12.565 fused_ordering(205) 00:13:13.133 fused_ordering(206) 00:13:13.133 fused_ordering(207) 00:13:13.133 fused_ordering(208) 00:13:13.133 fused_ordering(209) 00:13:13.133 fused_ordering(210) 00:13:13.133 fused_ordering(211) 00:13:13.133 fused_ordering(212) 00:13:13.133 fused_ordering(213) 00:13:13.133 fused_ordering(214) 00:13:13.133 fused_ordering(215) 00:13:13.133 fused_ordering(216) 00:13:13.133 fused_ordering(217) 00:13:13.133 fused_ordering(218) 00:13:13.133 fused_ordering(219) 00:13:13.133 fused_ordering(220) 00:13:13.133 fused_ordering(221) 00:13:13.133 fused_ordering(222) 00:13:13.133 fused_ordering(223) 00:13:13.133 fused_ordering(224) 00:13:13.133 fused_ordering(225) 00:13:13.133 fused_ordering(226) 00:13:13.133 fused_ordering(227) 00:13:13.133 fused_ordering(228) 00:13:13.133 fused_ordering(229) 00:13:13.133 fused_ordering(230) 00:13:13.133 fused_ordering(231) 00:13:13.133 fused_ordering(232) 00:13:13.133 fused_ordering(233) 00:13:13.133 fused_ordering(234) 00:13:13.133 fused_ordering(235) 00:13:13.133 fused_ordering(236) 00:13:13.133 fused_ordering(237) 00:13:13.133 fused_ordering(238) 00:13:13.133 fused_ordering(239) 00:13:13.133 fused_ordering(240) 00:13:13.133 fused_ordering(241) 00:13:13.133 fused_ordering(242) 00:13:13.134 fused_ordering(243) 00:13:13.134 fused_ordering(244) 00:13:13.134 fused_ordering(245) 00:13:13.134 fused_ordering(246) 00:13:13.134 fused_ordering(247) 00:13:13.134 fused_ordering(248) 00:13:13.134 fused_ordering(249) 00:13:13.134 fused_ordering(250) 00:13:13.134 fused_ordering(251) 00:13:13.134 fused_ordering(252) 00:13:13.134 fused_ordering(253) 00:13:13.134 fused_ordering(254) 00:13:13.134 fused_ordering(255) 00:13:13.134 fused_ordering(256) 00:13:13.134 fused_ordering(257) 00:13:13.134 fused_ordering(258) 00:13:13.134 fused_ordering(259) 00:13:13.134 fused_ordering(260) 00:13:13.134 fused_ordering(261) 00:13:13.134 fused_ordering(262) 00:13:13.134 fused_ordering(263) 00:13:13.134 fused_ordering(264) 00:13:13.134 fused_ordering(265) 00:13:13.134 fused_ordering(266) 00:13:13.134 fused_ordering(267) 00:13:13.134 fused_ordering(268) 00:13:13.134 fused_ordering(269) 00:13:13.134 fused_ordering(270) 00:13:13.134 fused_ordering(271) 00:13:13.134 fused_ordering(272) 00:13:13.134 fused_ordering(273) 00:13:13.134 fused_ordering(274) 00:13:13.134 fused_ordering(275) 00:13:13.134 fused_ordering(276) 00:13:13.134 fused_ordering(277) 00:13:13.134 fused_ordering(278) 00:13:13.134 fused_ordering(279) 00:13:13.134 fused_ordering(280) 00:13:13.134 fused_ordering(281) 00:13:13.134 fused_ordering(282) 00:13:13.134 fused_ordering(283) 00:13:13.134 fused_ordering(284) 00:13:13.134 fused_ordering(285) 00:13:13.134 fused_ordering(286) 00:13:13.134 fused_ordering(287) 00:13:13.134 fused_ordering(288) 00:13:13.134 fused_ordering(289) 00:13:13.134 fused_ordering(290) 00:13:13.134 fused_ordering(291) 00:13:13.134 fused_ordering(292) 00:13:13.134 fused_ordering(293) 00:13:13.134 fused_ordering(294) 00:13:13.134 fused_ordering(295) 00:13:13.134 fused_ordering(296) 00:13:13.134 fused_ordering(297) 00:13:13.134 fused_ordering(298) 00:13:13.134 fused_ordering(299) 00:13:13.134 fused_ordering(300) 00:13:13.134 fused_ordering(301) 00:13:13.134 fused_ordering(302) 00:13:13.134 fused_ordering(303) 00:13:13.134 fused_ordering(304) 00:13:13.134 fused_ordering(305) 00:13:13.134 fused_ordering(306) 00:13:13.134 fused_ordering(307) 00:13:13.134 fused_ordering(308) 00:13:13.134 fused_ordering(309) 00:13:13.134 fused_ordering(310) 00:13:13.134 fused_ordering(311) 00:13:13.134 
fused_ordering(312) 00:13:13.134 fused_ordering(313) 00:13:13.134 fused_ordering(314) 00:13:13.134 fused_ordering(315) 00:13:13.134 fused_ordering(316) 00:13:13.134 fused_ordering(317) 00:13:13.134 fused_ordering(318) 00:13:13.134 fused_ordering(319) 00:13:13.134 fused_ordering(320) 00:13:13.134 fused_ordering(321) 00:13:13.134 fused_ordering(322) 00:13:13.134 fused_ordering(323) 00:13:13.134 fused_ordering(324) 00:13:13.134 fused_ordering(325) 00:13:13.134 fused_ordering(326) 00:13:13.134 fused_ordering(327) 00:13:13.134 fused_ordering(328) 00:13:13.134 fused_ordering(329) 00:13:13.134 fused_ordering(330) 00:13:13.134 fused_ordering(331) 00:13:13.134 fused_ordering(332) 00:13:13.134 fused_ordering(333) 00:13:13.134 fused_ordering(334) 00:13:13.134 fused_ordering(335) 00:13:13.134 fused_ordering(336) 00:13:13.134 fused_ordering(337) 00:13:13.134 fused_ordering(338) 00:13:13.134 fused_ordering(339) 00:13:13.134 fused_ordering(340) 00:13:13.134 fused_ordering(341) 00:13:13.134 fused_ordering(342) 00:13:13.134 fused_ordering(343) 00:13:13.134 fused_ordering(344) 00:13:13.134 fused_ordering(345) 00:13:13.134 fused_ordering(346) 00:13:13.134 fused_ordering(347) 00:13:13.134 fused_ordering(348) 00:13:13.134 fused_ordering(349) 00:13:13.134 fused_ordering(350) 00:13:13.134 fused_ordering(351) 00:13:13.134 fused_ordering(352) 00:13:13.134 fused_ordering(353) 00:13:13.134 fused_ordering(354) 00:13:13.134 fused_ordering(355) 00:13:13.134 fused_ordering(356) 00:13:13.134 fused_ordering(357) 00:13:13.134 fused_ordering(358) 00:13:13.134 fused_ordering(359) 00:13:13.134 fused_ordering(360) 00:13:13.134 fused_ordering(361) 00:13:13.134 fused_ordering(362) 00:13:13.134 fused_ordering(363) 00:13:13.134 fused_ordering(364) 00:13:13.134 fused_ordering(365) 00:13:13.134 fused_ordering(366) 00:13:13.134 fused_ordering(367) 00:13:13.134 fused_ordering(368) 00:13:13.134 fused_ordering(369) 00:13:13.134 fused_ordering(370) 00:13:13.134 fused_ordering(371) 00:13:13.134 fused_ordering(372) 00:13:13.134 fused_ordering(373) 00:13:13.134 fused_ordering(374) 00:13:13.134 fused_ordering(375) 00:13:13.134 fused_ordering(376) 00:13:13.134 fused_ordering(377) 00:13:13.134 fused_ordering(378) 00:13:13.134 fused_ordering(379) 00:13:13.134 fused_ordering(380) 00:13:13.134 fused_ordering(381) 00:13:13.134 fused_ordering(382) 00:13:13.134 fused_ordering(383) 00:13:13.134 fused_ordering(384) 00:13:13.134 fused_ordering(385) 00:13:13.134 fused_ordering(386) 00:13:13.134 fused_ordering(387) 00:13:13.134 fused_ordering(388) 00:13:13.134 fused_ordering(389) 00:13:13.134 fused_ordering(390) 00:13:13.134 fused_ordering(391) 00:13:13.134 fused_ordering(392) 00:13:13.134 fused_ordering(393) 00:13:13.134 fused_ordering(394) 00:13:13.134 fused_ordering(395) 00:13:13.134 fused_ordering(396) 00:13:13.134 fused_ordering(397) 00:13:13.134 fused_ordering(398) 00:13:13.134 fused_ordering(399) 00:13:13.134 fused_ordering(400) 00:13:13.134 fused_ordering(401) 00:13:13.134 fused_ordering(402) 00:13:13.134 fused_ordering(403) 00:13:13.134 fused_ordering(404) 00:13:13.134 fused_ordering(405) 00:13:13.134 fused_ordering(406) 00:13:13.134 fused_ordering(407) 00:13:13.134 fused_ordering(408) 00:13:13.134 fused_ordering(409) 00:13:13.134 fused_ordering(410) 00:13:14.071 fused_ordering(411) 00:13:14.071 fused_ordering(412) 00:13:14.071 fused_ordering(413) 00:13:14.071 fused_ordering(414) 00:13:14.071 fused_ordering(415) 00:13:14.071 fused_ordering(416) 00:13:14.071 fused_ordering(417) 00:13:14.071 fused_ordering(418) 00:13:14.071 fused_ordering(419) 
00:13:14.071 fused_ordering(420) ... fused_ordering(956) 00:13:15.577 (per-entry fused_ordering counter output for entries 420 through 956, condensed)
fused_ordering(957) 00:13:15.577 fused_ordering(958) 00:13:15.577 fused_ordering(959) 00:13:15.577 fused_ordering(960) 00:13:15.577 fused_ordering(961) 00:13:15.577 fused_ordering(962) 00:13:15.577 fused_ordering(963) 00:13:15.577 fused_ordering(964) 00:13:15.577 fused_ordering(965) 00:13:15.577 fused_ordering(966) 00:13:15.577 fused_ordering(967) 00:13:15.577 fused_ordering(968) 00:13:15.577 fused_ordering(969) 00:13:15.577 fused_ordering(970) 00:13:15.577 fused_ordering(971) 00:13:15.577 fused_ordering(972) 00:13:15.577 fused_ordering(973) 00:13:15.577 fused_ordering(974) 00:13:15.577 fused_ordering(975) 00:13:15.577 fused_ordering(976) 00:13:15.577 fused_ordering(977) 00:13:15.577 fused_ordering(978) 00:13:15.577 fused_ordering(979) 00:13:15.577 fused_ordering(980) 00:13:15.577 fused_ordering(981) 00:13:15.577 fused_ordering(982) 00:13:15.577 fused_ordering(983) 00:13:15.577 fused_ordering(984) 00:13:15.577 fused_ordering(985) 00:13:15.577 fused_ordering(986) 00:13:15.577 fused_ordering(987) 00:13:15.577 fused_ordering(988) 00:13:15.577 fused_ordering(989) 00:13:15.578 fused_ordering(990) 00:13:15.578 fused_ordering(991) 00:13:15.578 fused_ordering(992) 00:13:15.578 fused_ordering(993) 00:13:15.578 fused_ordering(994) 00:13:15.578 fused_ordering(995) 00:13:15.578 fused_ordering(996) 00:13:15.578 fused_ordering(997) 00:13:15.578 fused_ordering(998) 00:13:15.578 fused_ordering(999) 00:13:15.578 fused_ordering(1000) 00:13:15.578 fused_ordering(1001) 00:13:15.578 fused_ordering(1002) 00:13:15.578 fused_ordering(1003) 00:13:15.578 fused_ordering(1004) 00:13:15.578 fused_ordering(1005) 00:13:15.578 fused_ordering(1006) 00:13:15.578 fused_ordering(1007) 00:13:15.578 fused_ordering(1008) 00:13:15.578 fused_ordering(1009) 00:13:15.578 fused_ordering(1010) 00:13:15.578 fused_ordering(1011) 00:13:15.578 fused_ordering(1012) 00:13:15.578 fused_ordering(1013) 00:13:15.578 fused_ordering(1014) 00:13:15.578 fused_ordering(1015) 00:13:15.578 fused_ordering(1016) 00:13:15.578 fused_ordering(1017) 00:13:15.578 fused_ordering(1018) 00:13:15.578 fused_ordering(1019) 00:13:15.578 fused_ordering(1020) 00:13:15.578 fused_ordering(1021) 00:13:15.578 fused_ordering(1022) 00:13:15.578 fused_ordering(1023) 00:13:15.578 01:21:07 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:15.578 01:21:07 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:15.578 01:21:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:15.578 01:21:07 -- nvmf/common.sh@116 -- # sync 00:13:15.578 01:21:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:15.578 01:21:07 -- nvmf/common.sh@119 -- # set +e 00:13:15.578 01:21:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:15.578 01:21:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:15.578 rmmod nvme_tcp 00:13:15.578 rmmod nvme_fabrics 00:13:15.578 rmmod nvme_keyring 00:13:15.578 01:21:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:15.578 01:21:07 -- nvmf/common.sh@123 -- # set -e 00:13:15.578 01:21:07 -- nvmf/common.sh@124 -- # return 0 00:13:15.578 01:21:07 -- nvmf/common.sh@477 -- # '[' -n 600523 ']' 00:13:15.578 01:21:07 -- nvmf/common.sh@478 -- # killprocess 600523 00:13:15.578 01:21:07 -- common/autotest_common.sh@926 -- # '[' -z 600523 ']' 00:13:15.578 01:21:07 -- common/autotest_common.sh@930 -- # kill -0 600523 00:13:15.578 01:21:07 -- common/autotest_common.sh@931 -- # uname 00:13:15.578 01:21:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:15.578 01:21:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 600523 00:13:15.578 01:21:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:15.578 01:21:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:15.578 01:21:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 600523' 00:13:15.578 killing process with pid 600523 00:13:15.578 01:21:07 -- common/autotest_common.sh@945 -- # kill 600523 00:13:15.578 01:21:07 -- common/autotest_common.sh@950 -- # wait 600523 00:13:15.837 01:21:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:15.837 01:21:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:15.837 01:21:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:15.837 01:21:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.837 01:21:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:15.837 01:21:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.837 01:21:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.837 01:21:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.375 01:21:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:18.375 00:13:18.375 real 0m9.194s 00:13:18.375 user 0m7.501s 00:13:18.375 sys 0m3.791s 00:13:18.375 01:21:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.375 01:21:09 -- common/autotest_common.sh@10 -- # set +x 00:13:18.375 ************************************ 00:13:18.375 END TEST nvmf_fused_ordering 00:13:18.375 ************************************ 00:13:18.375 01:21:09 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:18.375 01:21:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:18.375 01:21:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:18.375 01:21:09 -- common/autotest_common.sh@10 -- # set +x 00:13:18.375 ************************************ 00:13:18.375 START TEST nvmf_delete_subsystem 00:13:18.375 ************************************ 00:13:18.375 01:21:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:18.375 * Looking for test storage... 
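The teardown just above follows the autotest killprocess pattern: confirm the saved PID is still alive and still names the expected process, kill it, wait for it to exit, then unload the NVMe transport modules. A minimal, illustrative bash sketch of that pattern (the function and variable names here are not the exact helpers from autotest_common.sh):

# Illustrative sketch only; not the verbatim autotest_common.sh implementation
cleanup_nvmf_target() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then
        # Guard against PID reuse: check the process name before killing
        local name
        name=$(ps --no-headers -o comm= "$pid")
        if [ -n "$name" ] && [ "$name" != "sudo" ]; then
            echo "killing process with pid $pid"
            kill "$pid" && wait "$pid" 2>/dev/null
        fi
    fi
    # Unload the transport modules loaded for the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
}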
00:13:18.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.375 01:21:09 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.375 01:21:09 -- nvmf/common.sh@7 -- # uname -s 00:13:18.375 01:21:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.375 01:21:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.375 01:21:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.375 01:21:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.375 01:21:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.375 01:21:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.375 01:21:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.375 01:21:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.375 01:21:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.375 01:21:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.375 01:21:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.375 01:21:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.375 01:21:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.375 01:21:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.375 01:21:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.375 01:21:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.375 01:21:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.375 01:21:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.375 01:21:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.375 01:21:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.375 01:21:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.375 01:21:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.375 01:21:09 -- paths/export.sh@5 -- # export PATH 00:13:18.375 01:21:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.375 01:21:09 -- nvmf/common.sh@46 -- # : 0 00:13:18.375 01:21:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:18.375 01:21:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:18.375 01:21:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:18.375 01:21:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.375 01:21:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.375 01:21:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:18.375 01:21:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:18.375 01:21:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:18.375 01:21:09 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:18.375 01:21:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:18.375 01:21:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.375 01:21:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:18.375 01:21:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:18.375 01:21:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:18.375 01:21:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.375 01:21:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.375 01:21:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.375 01:21:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:18.375 01:21:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:18.375 01:21:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:18.375 01:21:09 -- common/autotest_common.sh@10 -- # set +x 00:13:20.282 01:21:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:20.282 01:21:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:20.282 01:21:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:20.282 01:21:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:20.282 01:21:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:20.282 01:21:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:20.282 01:21:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:20.282 01:21:11 -- nvmf/common.sh@294 -- # net_devs=() 00:13:20.282 01:21:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:20.282 01:21:11 -- nvmf/common.sh@295 -- # e810=() 00:13:20.282 01:21:11 -- nvmf/common.sh@295 -- # local -ga e810 00:13:20.282 01:21:11 -- nvmf/common.sh@296 -- # x722=() 
00:13:20.282 01:21:11 -- nvmf/common.sh@296 -- # local -ga x722 00:13:20.282 01:21:11 -- nvmf/common.sh@297 -- # mlx=() 00:13:20.282 01:21:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:20.282 01:21:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.282 01:21:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.282 01:21:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.282 01:21:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.282 01:21:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.282 01:21:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.282 01:21:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.282 01:21:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.282 01:21:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.282 01:21:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.282 01:21:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.282 01:21:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:20.282 01:21:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:20.282 01:21:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:20.282 01:21:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:20.282 01:21:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:20.282 01:21:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:20.282 01:21:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:20.282 01:21:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:20.282 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:20.282 01:21:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:20.282 01:21:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:20.282 01:21:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.282 01:21:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.282 01:21:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:20.282 01:21:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:20.282 01:21:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:20.282 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:20.282 01:21:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:20.282 01:21:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:20.283 01:21:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.283 01:21:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.283 01:21:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:20.283 01:21:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:20.283 01:21:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:20.283 01:21:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:20.283 01:21:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:20.283 01:21:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.283 01:21:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:20.283 01:21:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.283 01:21:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:20.283 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:20.283 01:21:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
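The discovery loop above resolves each matching PCI function to its kernel interface name by globbing the device's net/ directory in sysfs, which is how the cvl_0_0 and cvl_0_1 names are obtained. A stripped-down sketch of that lookup (the helper name is illustrative):

# Illustrative helper: list the net interfaces backed by a PCI function
pci_to_netdevs() {
    local bdf=$1                                    # e.g. 0000:0a:00.0
    local devs=(/sys/bus/pci/devices/"$bdf"/net/*)  # one entry per interface
    devs=("${devs[@]##*/}")                         # strip the sysfs path prefix
    printf '%s\n' "${devs[@]}"
}
# Example from this run: pci_to_netdevs 0000:0a:00.0 prints cvl_0_0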
00:13:20.283 01:21:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:20.283 01:21:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.283 01:21:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:20.283 01:21:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.283 01:21:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:20.283 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:20.283 01:21:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.283 01:21:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:20.283 01:21:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:20.283 01:21:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:20.283 01:21:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:20.283 01:21:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:20.283 01:21:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.283 01:21:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.283 01:21:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.283 01:21:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:20.283 01:21:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.283 01:21:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.283 01:21:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:20.283 01:21:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.283 01:21:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.283 01:21:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:20.283 01:21:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:20.283 01:21:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.283 01:21:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.283 01:21:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.283 01:21:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.283 01:21:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:20.283 01:21:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.283 01:21:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.283 01:21:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.283 01:21:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:20.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:13:20.283 00:13:20.283 --- 10.0.0.2 ping statistics --- 00:13:20.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.283 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:13:20.283 01:21:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:13:20.283 00:13:20.283 --- 10.0.0.1 ping statistics --- 00:13:20.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.283 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:13:20.283 01:21:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.283 01:21:11 -- nvmf/common.sh@410 -- # return 0 00:13:20.283 01:21:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:20.283 01:21:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.283 01:21:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:20.283 01:21:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:20.283 01:21:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.283 01:21:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:20.283 01:21:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:20.283 01:21:11 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:20.283 01:21:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:20.283 01:21:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:20.283 01:21:11 -- common/autotest_common.sh@10 -- # set +x 00:13:20.283 01:21:11 -- nvmf/common.sh@469 -- # nvmfpid=603548 00:13:20.283 01:21:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:20.283 01:21:11 -- nvmf/common.sh@470 -- # waitforlisten 603548 00:13:20.283 01:21:11 -- common/autotest_common.sh@819 -- # '[' -z 603548 ']' 00:13:20.283 01:21:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.283 01:21:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:20.283 01:21:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.283 01:21:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:20.283 01:21:11 -- common/autotest_common.sh@10 -- # set +x 00:13:20.283 [2024-07-27 01:21:11.713442] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:20.283 [2024-07-27 01:21:11.713529] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.283 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.283 [2024-07-27 01:21:11.786129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:20.283 [2024-07-27 01:21:11.899992] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:20.283 [2024-07-27 01:21:11.900173] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.283 [2024-07-27 01:21:11.900195] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.283 [2024-07-27 01:21:11.900208] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
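Those ping results come from the split-namespace TCP topology that nvmf_tcp_init builds before the target app is launched: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, and nvmf_tgt is then started with ip netns exec inside that namespace. A condensed, illustrative reconstruction of the setup commands visible in the trace above (not a verbatim copy of nvmf/common.sh):

# Condensed from the trace above
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator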
00:13:20.283 [2024-07-27 01:21:11.900256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.283 [2024-07-27 01:21:11.900261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.222 01:21:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:21.222 01:21:12 -- common/autotest_common.sh@852 -- # return 0 00:13:21.222 01:21:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:21.222 01:21:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:21.222 01:21:12 -- common/autotest_common.sh@10 -- # set +x 00:13:21.222 01:21:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.222 01:21:12 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:21.222 01:21:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.222 01:21:12 -- common/autotest_common.sh@10 -- # set +x 00:13:21.222 [2024-07-27 01:21:12.685635] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.222 01:21:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.222 01:21:12 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:21.222 01:21:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.222 01:21:12 -- common/autotest_common.sh@10 -- # set +x 00:13:21.222 01:21:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.222 01:21:12 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.222 01:21:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.222 01:21:12 -- common/autotest_common.sh@10 -- # set +x 00:13:21.222 [2024-07-27 01:21:12.701843] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.222 01:21:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.222 01:21:12 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:21.222 01:21:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.222 01:21:12 -- common/autotest_common.sh@10 -- # set +x 00:13:21.222 NULL1 00:13:21.222 01:21:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.222 01:21:12 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:21.222 01:21:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.222 01:21:12 -- common/autotest_common.sh@10 -- # set +x 00:13:21.222 Delay0 00:13:21.222 01:21:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.222 01:21:12 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.222 01:21:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.222 01:21:12 -- common/autotest_common.sh@10 -- # set +x 00:13:21.222 01:21:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.222 01:21:12 -- target/delete_subsystem.sh@28 -- # perf_pid=603704 00:13:21.222 01:21:12 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:21.222 01:21:12 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:21.223 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.223 [2024-07-27 01:21:12.776672] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:23.129 01:21:14 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.129 01:21:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.129 01:21:14 -- common/autotest_common.sh@10 -- # set +x
00:13:23.389 - 00:13:24.327 (repeated per-I/O completion records condensed: many "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" lines interleaved with "starting I/O failed: -6" markers while the subsystem is deleted under load; the distinct errors logged in this window follow)
[2024-07-27 01:21:15.027458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f972c00c350 is same with the state(5) to be set
[2024-07-27 01:21:15.029190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058e10 is same with the state(5) to be set
[2024-07-27 01:21:15.998092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10785a0 is same with the state(5) to be set
[2024-07-27 01:21:16.031014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f972c000c00 is same with the state(5) to be set
[2024-07-27 01:21:16.032316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f972c00c600 is same with the state(5) to be set
[2024-07-27 01:21:16.032512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f972c00bf20 is same with the state(5) to be set
[2024-07-27 01:21:16.032639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10599b0 is same with the state(5) to be set
[2024-07-27 01:21:16.033726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10785a0 (9): Bad file descriptor
00:13:24.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:24.327 01:21:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.327 01:21:16 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:24.327 01:21:16 -- target/delete_subsystem.sh@35 -- # kill -0 603704 00:13:24.327 01:21:16 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:24.327 Initializing NVMe Controllers 00:13:24.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:24.327 Controller IO queue size 128, less than required. 00:13:24.327 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:24.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:24.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:24.327 Initialization complete. Launching workers.
00:13:24.327 ======================================================== 00:13:24.327 Latency(us) 00:13:24.327 Device Information : IOPS MiB/s Average min max 00:13:24.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 147.87 0.07 909311.78 300.70 1013660.71 00:13:24.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 171.69 0.08 965040.73 1192.31 1012129.02 00:13:24.327 ======================================================== 00:13:24.327 Total : 319.56 0.16 939253.11 300.70 1013660.71 00:13:24.327 00:13:24.897 01:21:16 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:24.897 01:21:16 -- target/delete_subsystem.sh@35 -- # kill -0 603704 00:13:24.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (603704) - No such process 00:13:24.897 01:21:16 -- target/delete_subsystem.sh@45 -- # NOT wait 603704 00:13:24.897 01:21:16 -- common/autotest_common.sh@640 -- # local es=0 00:13:24.897 01:21:16 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 603704 00:13:24.897 01:21:16 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:24.897 01:21:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:24.897 01:21:16 -- common/autotest_common.sh@632 -- # type -t wait 00:13:24.897 01:21:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:24.897 01:21:16 -- common/autotest_common.sh@643 -- # wait 603704 00:13:24.897 01:21:16 -- common/autotest_common.sh@643 -- # es=1 00:13:24.897 01:21:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:24.897 01:21:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:24.897 01:21:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:24.897 01:21:16 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:24.897 01:21:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.897 01:21:16 -- common/autotest_common.sh@10 -- # set +x 00:13:24.897 01:21:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.897 01:21:16 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.897 01:21:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.897 01:21:16 -- common/autotest_common.sh@10 -- # set +x 00:13:24.897 [2024-07-27 01:21:16.551748] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.897 01:21:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.897 01:21:16 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.897 01:21:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.897 01:21:16 -- common/autotest_common.sh@10 -- # set +x 00:13:24.897 01:21:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.897 01:21:16 -- target/delete_subsystem.sh@54 -- # perf_pid=604118 00:13:24.897 01:21:16 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:24.897 01:21:16 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:24.897 01:21:16 -- target/delete_subsystem.sh@57 -- # kill -0 604118 00:13:24.897 01:21:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:24.897 EAL: No free 2048 kB hugepages reported on 
node 1 00:13:24.897 [2024-07-27 01:21:16.607939] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:25.466 01:21:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:25.466 01:21:17 -- target/delete_subsystem.sh@57 -- # kill -0 604118 00:13:25.466 01:21:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:26.035 01:21:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:26.035 01:21:17 -- target/delete_subsystem.sh@57 -- # kill -0 604118 00:13:26.035 01:21:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:26.605 01:21:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:26.605 01:21:18 -- target/delete_subsystem.sh@57 -- # kill -0 604118 00:13:26.605 01:21:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:26.864 01:21:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:26.864 01:21:18 -- target/delete_subsystem.sh@57 -- # kill -0 604118 00:13:26.864 01:21:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:27.431 01:21:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:27.431 01:21:19 -- target/delete_subsystem.sh@57 -- # kill -0 604118 00:13:27.431 01:21:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:28.002 01:21:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:28.002 01:21:19 -- target/delete_subsystem.sh@57 -- # kill -0 604118 00:13:28.002 01:21:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:28.002 Initializing NVMe Controllers 00:13:28.002 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:28.002 Controller IO queue size 128, less than required. 00:13:28.002 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:28.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:28.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:28.002 Initialization complete. Launching workers. 
00:13:28.002 ======================================================== 00:13:28.002 Latency(us) 00:13:28.002 Device Information : IOPS MiB/s Average min max 00:13:28.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003126.35 1000213.61 1010562.83 00:13:28.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004831.31 1000316.35 1042957.87 00:13:28.002 ======================================================== 00:13:28.002 Total : 256.00 0.12 1003978.83 1000213.61 1042957.87 00:13:28.002 00:13:28.569 01:21:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:28.569 01:21:20 -- target/delete_subsystem.sh@57 -- # kill -0 604118 00:13:28.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (604118) - No such process 00:13:28.569 01:21:20 -- target/delete_subsystem.sh@67 -- # wait 604118 00:13:28.569 01:21:20 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:28.569 01:21:20 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:28.569 01:21:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:28.569 01:21:20 -- nvmf/common.sh@116 -- # sync 00:13:28.569 01:21:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:28.569 01:21:20 -- nvmf/common.sh@119 -- # set +e 00:13:28.569 01:21:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:28.569 01:21:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:28.569 rmmod nvme_tcp 00:13:28.569 rmmod nvme_fabrics 00:13:28.569 rmmod nvme_keyring 00:13:28.569 01:21:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:28.569 01:21:20 -- nvmf/common.sh@123 -- # set -e 00:13:28.569 01:21:20 -- nvmf/common.sh@124 -- # return 0 00:13:28.569 01:21:20 -- nvmf/common.sh@477 -- # '[' -n 603548 ']' 00:13:28.569 01:21:20 -- nvmf/common.sh@478 -- # killprocess 603548 00:13:28.569 01:21:20 -- common/autotest_common.sh@926 -- # '[' -z 603548 ']' 00:13:28.569 01:21:20 -- common/autotest_common.sh@930 -- # kill -0 603548 00:13:28.569 01:21:20 -- common/autotest_common.sh@931 -- # uname 00:13:28.569 01:21:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:28.569 01:21:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 603548 00:13:28.569 01:21:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:28.569 01:21:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:28.569 01:21:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 603548' 00:13:28.569 killing process with pid 603548 00:13:28.569 01:21:20 -- common/autotest_common.sh@945 -- # kill 603548 00:13:28.569 01:21:20 -- common/autotest_common.sh@950 -- # wait 603548 00:13:28.827 01:21:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:28.827 01:21:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:28.827 01:21:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:28.827 01:21:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.827 01:21:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:28.827 01:21:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.827 01:21:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.827 01:21:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.734 01:21:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:30.734 00:13:30.734 real 0m12.935s 00:13:30.734 user 0m29.432s 00:13:30.734 sys 0m2.931s 00:13:30.734 01:21:22 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.734 01:21:22 -- common/autotest_common.sh@10 -- # set +x 00:13:30.734 ************************************ 00:13:30.734 END TEST nvmf_delete_subsystem 00:13:30.734 ************************************ 00:13:30.994 01:21:22 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:13:30.994 01:21:22 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:30.994 01:21:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:30.994 01:21:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:30.994 01:21:22 -- common/autotest_common.sh@10 -- # set +x 00:13:30.994 ************************************ 00:13:30.994 START TEST nvmf_nvme_cli 00:13:30.994 ************************************ 00:13:30.994 01:21:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:30.994 * Looking for test storage... 00:13:30.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.994 01:21:22 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.994 01:21:22 -- nvmf/common.sh@7 -- # uname -s 00:13:30.994 01:21:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.994 01:21:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.994 01:21:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.994 01:21:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.994 01:21:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.994 01:21:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.994 01:21:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.994 01:21:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.994 01:21:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.994 01:21:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.994 01:21:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:30.994 01:21:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:30.994 01:21:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.994 01:21:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.994 01:21:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.994 01:21:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.994 01:21:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.994 01:21:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.994 01:21:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.994 01:21:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.994 01:21:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.994 01:21:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.994 01:21:22 -- paths/export.sh@5 -- # export PATH 00:13:30.994 01:21:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.994 01:21:22 -- nvmf/common.sh@46 -- # : 0 00:13:30.994 01:21:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:30.994 01:21:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:30.994 01:21:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:30.994 01:21:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.994 01:21:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.994 01:21:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:30.994 01:21:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:30.994 01:21:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:30.994 01:21:22 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:30.994 01:21:22 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:30.994 01:21:22 -- target/nvme_cli.sh@14 -- # devs=() 00:13:30.994 01:21:22 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:30.994 01:21:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:30.994 01:21:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.994 01:21:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:30.994 01:21:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:30.994 01:21:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:30.994 01:21:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.994 01:21:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.994 01:21:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.994 01:21:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:30.994 01:21:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:30.994 01:21:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:30.995 01:21:22 -- common/autotest_common.sh@10 -- # set +x 00:13:32.905 01:21:24 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:32.905 01:21:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:32.905 01:21:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:32.905 01:21:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:32.905 01:21:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:32.905 01:21:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:32.905 01:21:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:32.905 01:21:24 -- nvmf/common.sh@294 -- # net_devs=() 00:13:32.905 01:21:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:32.905 01:21:24 -- nvmf/common.sh@295 -- # e810=() 00:13:32.905 01:21:24 -- nvmf/common.sh@295 -- # local -ga e810 00:13:32.905 01:21:24 -- nvmf/common.sh@296 -- # x722=() 00:13:32.905 01:21:24 -- nvmf/common.sh@296 -- # local -ga x722 00:13:32.905 01:21:24 -- nvmf/common.sh@297 -- # mlx=() 00:13:32.905 01:21:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:32.905 01:21:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.905 01:21:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.905 01:21:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.905 01:21:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.905 01:21:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.905 01:21:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.905 01:21:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.905 01:21:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.905 01:21:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.905 01:21:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.905 01:21:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.905 01:21:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:32.905 01:21:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:32.905 01:21:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:32.905 01:21:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:32.905 01:21:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:32.905 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:32.905 01:21:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:32.905 01:21:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:32.905 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:32.905 01:21:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
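The trace above is the NIC-discovery step: nvmf/common.sh builds lists of supported Intel (e810/x722) and Mellanox PCI device IDs, matches them against the bus, and then resolves each hit to its kernel interface through sysfs. A stand-alone sketch of that lookup for the 0x159b parts found on this machine (the helper name is made up for illustration; the sysfs path and the cvl_* names come from the log itself):

    find_e810_netdevs() {
        local pci
        # every PCI function with vendor 0x8086 and device 0x159b (the E810 parts above)
        for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
            # a network function exposes its interface name(s) under .../net/
            ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null
        done
    }
    find_e810_netdevs    # on this rig: cvl_0_0 and cvl_0_1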
00:13:32.905 01:21:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:32.905 01:21:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:32.905 01:21:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.905 01:21:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:32.905 01:21:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.905 01:21:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:32.905 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:32.905 01:21:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.905 01:21:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:32.905 01:21:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.905 01:21:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:32.905 01:21:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.905 01:21:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:32.905 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:32.905 01:21:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.905 01:21:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:32.905 01:21:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:32.905 01:21:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:32.905 01:21:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:32.905 01:21:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.905 01:21:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.905 01:21:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.905 01:21:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:32.905 01:21:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.905 01:21:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.905 01:21:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:32.905 01:21:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.906 01:21:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.906 01:21:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:32.906 01:21:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:32.906 01:21:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.906 01:21:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.906 01:21:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.906 01:21:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.906 01:21:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:32.906 01:21:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.906 01:21:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.906 01:21:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.906 01:21:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:32.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:32.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:13:32.906 00:13:32.906 --- 10.0.0.2 ping statistics --- 00:13:32.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.906 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:13:32.906 01:21:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:13:32.906 00:13:32.906 --- 10.0.0.1 ping statistics --- 00:13:32.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.906 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:13:32.906 01:21:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.906 01:21:24 -- nvmf/common.sh@410 -- # return 0 00:13:32.906 01:21:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:32.906 01:21:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.906 01:21:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:32.906 01:21:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:32.906 01:21:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.906 01:21:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:32.906 01:21:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:32.906 01:21:24 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:32.906 01:21:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:32.906 01:21:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:32.906 01:21:24 -- common/autotest_common.sh@10 -- # set +x 00:13:32.906 01:21:24 -- nvmf/common.sh@469 -- # nvmfpid=606484 00:13:32.906 01:21:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.906 01:21:24 -- nvmf/common.sh@470 -- # waitforlisten 606484 00:13:32.906 01:21:24 -- common/autotest_common.sh@819 -- # '[' -z 606484 ']' 00:13:32.906 01:21:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.906 01:21:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:32.906 01:21:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.906 01:21:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:32.906 01:21:24 -- common/autotest_common.sh@10 -- # set +x 00:13:32.906 [2024-07-27 01:21:24.651171] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:32.906 [2024-07-27 01:21:24.651240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.164 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.164 [2024-07-27 01:21:24.722159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.164 [2024-07-27 01:21:24.837489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:33.164 [2024-07-27 01:21:24.837645] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.164 [2024-07-27 01:21:24.837665] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
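Everything from gather_supported_nvmf_pci_devs through the two pings is building the loopback topology these phy tests rely on: one port of the dual-port NIC is moved into a private network namespace and plays the target (10.0.0.2), its sibling port stays in the root namespace as the initiator (10.0.0.1), and the NVMe-oF target is then launched inside the namespace. The same steps pulled out of the trace into a plain sequence (names and addresses are verbatim from the log; only the ordering comments are added):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    # the target application itself runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF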
00:13:33.164 [2024-07-27 01:21:24.837680] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.164 [2024-07-27 01:21:24.837749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.164 [2024-07-27 01:21:24.837819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.164 [2024-07-27 01:21:24.837909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.164 [2024-07-27 01:21:24.837912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.096 01:21:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:34.096 01:21:25 -- common/autotest_common.sh@852 -- # return 0 00:13:34.096 01:21:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:34.096 01:21:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:34.096 01:21:25 -- common/autotest_common.sh@10 -- # set +x 00:13:34.096 01:21:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.096 01:21:25 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:34.096 01:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.096 01:21:25 -- common/autotest_common.sh@10 -- # set +x 00:13:34.096 [2024-07-27 01:21:25.650631] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.096 01:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.096 01:21:25 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:34.096 01:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.096 01:21:25 -- common/autotest_common.sh@10 -- # set +x 00:13:34.096 Malloc0 00:13:34.096 01:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.096 01:21:25 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:34.096 01:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.096 01:21:25 -- common/autotest_common.sh@10 -- # set +x 00:13:34.096 Malloc1 00:13:34.096 01:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.096 01:21:25 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:34.096 01:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.096 01:21:25 -- common/autotest_common.sh@10 -- # set +x 00:13:34.096 01:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.096 01:21:25 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:34.096 01:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.096 01:21:25 -- common/autotest_common.sh@10 -- # set +x 00:13:34.096 01:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.096 01:21:25 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.096 01:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.096 01:21:25 -- common/autotest_common.sh@10 -- # set +x 00:13:34.096 01:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.096 01:21:25 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.096 01:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.096 01:21:25 -- common/autotest_common.sh@10 -- # set +x 00:13:34.096 [2024-07-27 01:21:25.735033] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:13:34.096 01:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.096 01:21:25 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:34.096 01:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.096 01:21:25 -- common/autotest_common.sh@10 -- # set +x 00:13:34.096 01:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.096 01:21:25 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:13:34.096 00:13:34.096 Discovery Log Number of Records 2, Generation counter 2 00:13:34.097 =====Discovery Log Entry 0====== 00:13:34.097 trtype: tcp 00:13:34.097 adrfam: ipv4 00:13:34.097 subtype: current discovery subsystem 00:13:34.097 treq: not required 00:13:34.097 portid: 0 00:13:34.097 trsvcid: 4420 00:13:34.097 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:34.097 traddr: 10.0.0.2 00:13:34.097 eflags: explicit discovery connections, duplicate discovery information 00:13:34.097 sectype: none 00:13:34.097 =====Discovery Log Entry 1====== 00:13:34.097 trtype: tcp 00:13:34.097 adrfam: ipv4 00:13:34.097 subtype: nvme subsystem 00:13:34.097 treq: not required 00:13:34.097 portid: 0 00:13:34.097 trsvcid: 4420 00:13:34.097 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:34.097 traddr: 10.0.0.2 00:13:34.097 eflags: none 00:13:34.097 sectype: none 00:13:34.097 01:21:25 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:34.097 01:21:25 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:34.097 01:21:25 -- nvmf/common.sh@510 -- # local dev _ 00:13:34.097 01:21:25 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:34.097 01:21:25 -- nvmf/common.sh@509 -- # nvme list 00:13:34.097 01:21:25 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:34.097 01:21:25 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:34.097 01:21:25 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:34.097 01:21:25 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:34.097 01:21:25 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:34.097 01:21:25 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.661 01:21:26 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:34.661 01:21:26 -- common/autotest_common.sh@1177 -- # local i=0 00:13:34.661 01:21:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.661 01:21:26 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:13:34.661 01:21:26 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:13:34.661 01:21:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:37.186 01:21:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:37.186 01:21:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:37.186 01:21:28 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.186 01:21:28 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:13:37.186 01:21:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.186 01:21:28 -- common/autotest_common.sh@1187 -- # return 0 00:13:37.186 01:21:28 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:37.186 01:21:28 -- 
nvmf/common.sh@510 -- # local dev _ 00:13:37.186 01:21:28 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:37.186 01:21:28 -- nvmf/common.sh@509 -- # nvme list 00:13:37.186 01:21:28 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:37.186 01:21:28 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:37.186 01:21:28 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:37.186 01:21:28 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:37.186 01:21:28 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:37.186 01:21:28 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:13:37.186 01:21:28 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:37.186 01:21:28 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:37.186 01:21:28 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:13:37.186 01:21:28 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:37.186 01:21:28 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:37.186 /dev/nvme0n1 ]] 00:13:37.186 01:21:28 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:37.186 01:21:28 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:37.186 01:21:28 -- nvmf/common.sh@510 -- # local dev _ 00:13:37.186 01:21:28 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:37.186 01:21:28 -- nvmf/common.sh@509 -- # nvme list 00:13:37.186 01:21:28 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:37.186 01:21:28 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:37.186 01:21:28 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:37.186 01:21:28 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:37.186 01:21:28 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:37.186 01:21:28 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:13:37.186 01:21:28 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:37.186 01:21:28 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:37.186 01:21:28 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:13:37.186 01:21:28 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:37.186 01:21:28 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:37.186 01:21:28 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.186 01:21:28 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:37.186 01:21:28 -- common/autotest_common.sh@1198 -- # local i=0 00:13:37.186 01:21:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:37.186 01:21:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.186 01:21:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:37.186 01:21:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.186 01:21:28 -- common/autotest_common.sh@1210 -- # return 0 00:13:37.186 01:21:28 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:37.186 01:21:28 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.186 01:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.186 01:21:28 -- common/autotest_common.sh@10 -- # set +x 00:13:37.186 01:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.186 01:21:28 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:37.186 01:21:28 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:37.186 01:21:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:37.186 01:21:28 -- nvmf/common.sh@116 -- # sync 00:13:37.186 01:21:28 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:37.186 01:21:28 -- nvmf/common.sh@119 -- # set +e 00:13:37.186 01:21:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:37.186 01:21:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:37.186 rmmod nvme_tcp 00:13:37.186 rmmod nvme_fabrics 00:13:37.186 rmmod nvme_keyring 00:13:37.186 01:21:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:37.186 01:21:28 -- nvmf/common.sh@123 -- # set -e 00:13:37.186 01:21:28 -- nvmf/common.sh@124 -- # return 0 00:13:37.186 01:21:28 -- nvmf/common.sh@477 -- # '[' -n 606484 ']' 00:13:37.186 01:21:28 -- nvmf/common.sh@478 -- # killprocess 606484 00:13:37.186 01:21:28 -- common/autotest_common.sh@926 -- # '[' -z 606484 ']' 00:13:37.186 01:21:28 -- common/autotest_common.sh@930 -- # kill -0 606484 00:13:37.186 01:21:28 -- common/autotest_common.sh@931 -- # uname 00:13:37.186 01:21:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:37.186 01:21:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 606484 00:13:37.186 01:21:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:37.186 01:21:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:37.186 01:21:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 606484' 00:13:37.186 killing process with pid 606484 00:13:37.186 01:21:28 -- common/autotest_common.sh@945 -- # kill 606484 00:13:37.186 01:21:28 -- common/autotest_common.sh@950 -- # wait 606484 00:13:37.186 01:21:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:37.186 01:21:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:37.186 01:21:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:37.186 01:21:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.186 01:21:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:37.187 01:21:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.187 01:21:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.187 01:21:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.715 01:21:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:39.715 00:13:39.715 real 0m8.486s 00:13:39.715 user 0m16.875s 00:13:39.715 sys 0m2.119s 00:13:39.715 01:21:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.715 01:21:30 -- common/autotest_common.sh@10 -- # set +x 00:13:39.715 ************************************ 00:13:39.715 END TEST nvmf_nvme_cli 00:13:39.716 ************************************ 00:13:39.716 01:21:31 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:13:39.716 01:21:31 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:39.716 01:21:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:39.716 01:21:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:39.716 01:21:31 -- common/autotest_common.sh@10 -- # set +x 00:13:39.716 ************************************ 00:13:39.716 START TEST nvmf_host_management 00:13:39.716 ************************************ 00:13:39.716 01:21:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:39.716 * Looking for test storage... 
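The nvme_cli test that just completed is, stripped of the xtrace noise, the standard nvme-cli round trip against the SPDK target; it comes down to the commands below (NQNs, host UUID and address are the ones printed in the log, and the two malloc namespaces surface as /dev/nvme0n1 and /dev/nvme0n2):

    # discovery log should list the discovery subsystem plus nqn.2016-06.io.spdk:cnode1
    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    # connect, then confirm both namespaces are visible before tearing down
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    nvme list
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1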
00:13:39.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.716 01:21:31 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.716 01:21:31 -- nvmf/common.sh@7 -- # uname -s 00:13:39.716 01:21:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.716 01:21:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.716 01:21:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.716 01:21:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.716 01:21:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.716 01:21:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.716 01:21:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.716 01:21:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.716 01:21:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.716 01:21:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.716 01:21:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:39.716 01:21:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:39.716 01:21:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.716 01:21:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.716 01:21:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.716 01:21:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.716 01:21:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.716 01:21:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.716 01:21:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.716 01:21:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.716 01:21:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.716 01:21:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.716 01:21:31 -- paths/export.sh@5 -- # export PATH 00:13:39.716 01:21:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.716 01:21:31 -- nvmf/common.sh@46 -- # : 0 00:13:39.716 01:21:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:39.716 01:21:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:39.716 01:21:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:39.716 01:21:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.716 01:21:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.716 01:21:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:39.716 01:21:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:39.716 01:21:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:39.716 01:21:31 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:39.716 01:21:31 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:39.716 01:21:31 -- target/host_management.sh@104 -- # nvmftestinit 00:13:39.716 01:21:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:39.716 01:21:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.716 01:21:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:39.716 01:21:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:39.716 01:21:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:39.716 01:21:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.716 01:21:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.716 01:21:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.716 01:21:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:39.716 01:21:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:39.716 01:21:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:39.716 01:21:31 -- common/autotest_common.sh@10 -- # set +x 00:13:41.617 01:21:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:41.617 01:21:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:41.617 01:21:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:41.617 01:21:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:41.617 01:21:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:41.617 01:21:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:41.617 01:21:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:41.617 01:21:33 -- nvmf/common.sh@294 -- # net_devs=() 00:13:41.617 01:21:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:41.617 
01:21:33 -- nvmf/common.sh@295 -- # e810=() 00:13:41.617 01:21:33 -- nvmf/common.sh@295 -- # local -ga e810 00:13:41.617 01:21:33 -- nvmf/common.sh@296 -- # x722=() 00:13:41.617 01:21:33 -- nvmf/common.sh@296 -- # local -ga x722 00:13:41.617 01:21:33 -- nvmf/common.sh@297 -- # mlx=() 00:13:41.617 01:21:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:41.617 01:21:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.617 01:21:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.617 01:21:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.617 01:21:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.617 01:21:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.617 01:21:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.617 01:21:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.617 01:21:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.617 01:21:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.617 01:21:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.617 01:21:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.617 01:21:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:41.617 01:21:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:41.617 01:21:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:41.617 01:21:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:41.617 01:21:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:41.617 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:41.617 01:21:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:41.617 01:21:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:41.617 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:41.617 01:21:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:41.617 01:21:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:41.617 01:21:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.617 01:21:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:41.617 01:21:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.617 01:21:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:13:41.617 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:41.617 01:21:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.617 01:21:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:41.617 01:21:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.617 01:21:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:41.617 01:21:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.617 01:21:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:41.617 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:41.617 01:21:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.617 01:21:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:41.617 01:21:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:41.617 01:21:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:41.617 01:21:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.617 01:21:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.617 01:21:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.617 01:21:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:41.617 01:21:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.617 01:21:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.617 01:21:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:41.617 01:21:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.617 01:21:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.617 01:21:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:41.617 01:21:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:41.617 01:21:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.617 01:21:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.617 01:21:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.617 01:21:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.617 01:21:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:41.617 01:21:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.617 01:21:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.617 01:21:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.617 01:21:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:41.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:13:41.617 00:13:41.617 --- 10.0.0.2 ping statistics --- 00:13:41.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.617 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:13:41.617 01:21:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:41.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:13:41.617 00:13:41.617 --- 10.0.0.1 ping statistics --- 00:13:41.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.617 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:13:41.617 01:21:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.617 01:21:33 -- nvmf/common.sh@410 -- # return 0 00:13:41.617 01:21:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:41.617 01:21:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.617 01:21:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:41.617 01:21:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.617 01:21:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:41.617 01:21:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:41.617 01:21:33 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:41.617 01:21:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:41.617 01:21:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:41.617 01:21:33 -- common/autotest_common.sh@10 -- # set +x 00:13:41.617 ************************************ 00:13:41.617 START TEST nvmf_host_management 00:13:41.617 ************************************ 00:13:41.617 01:21:33 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:13:41.617 01:21:33 -- target/host_management.sh@69 -- # starttarget 00:13:41.617 01:21:33 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:41.617 01:21:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:41.617 01:21:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:41.617 01:21:33 -- common/autotest_common.sh@10 -- # set +x 00:13:41.617 01:21:33 -- nvmf/common.sh@469 -- # nvmfpid=609025 00:13:41.617 01:21:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:41.617 01:21:33 -- nvmf/common.sh@470 -- # waitforlisten 609025 00:13:41.617 01:21:33 -- common/autotest_common.sh@819 -- # '[' -z 609025 ']' 00:13:41.617 01:21:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.617 01:21:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:41.617 01:21:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.617 01:21:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:41.617 01:21:33 -- common/autotest_common.sh@10 -- # set +x 00:13:41.617 [2024-07-27 01:21:33.332812] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
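nvmfappstart launches nvmf_tgt inside the namespace and then sits in waitforlisten until the application answers on its JSON-RPC socket at /var/tmp/spdk.sock. A hand-rolled equivalent of that readiness check, assuming rpc.py's stock rpc_get_methods call is an adequate liveness probe (the loop shape is illustrative, not the exact helper used by the harness):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    pid=$!
    for _ in $(seq 1 100); do
        # the target counts as "listening" once the RPC socket accepts a request
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done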
00:13:41.617 [2024-07-27 01:21:33.332893] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.617 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.874 [2024-07-27 01:21:33.397500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.874 [2024-07-27 01:21:33.507598] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:41.874 [2024-07-27 01:21:33.507741] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.874 [2024-07-27 01:21:33.507759] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.874 [2024-07-27 01:21:33.507771] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.875 [2024-07-27 01:21:33.507866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.875 [2024-07-27 01:21:33.507910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.875 [2024-07-27 01:21:33.507968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:41.875 [2024-07-27 01:21:33.507970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.804 01:21:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:42.804 01:21:34 -- common/autotest_common.sh@852 -- # return 0 00:13:42.804 01:21:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:42.804 01:21:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:42.804 01:21:34 -- common/autotest_common.sh@10 -- # set +x 00:13:42.804 01:21:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.804 01:21:34 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.804 01:21:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.804 01:21:34 -- common/autotest_common.sh@10 -- # set +x 00:13:42.804 [2024-07-27 01:21:34.334622] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.804 01:21:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.804 01:21:34 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:42.804 01:21:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:42.804 01:21:34 -- common/autotest_common.sh@10 -- # set +x 00:13:42.804 01:21:34 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:42.804 01:21:34 -- target/host_management.sh@23 -- # cat 00:13:42.804 01:21:34 -- target/host_management.sh@30 -- # rpc_cmd 00:13:42.804 01:21:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.804 01:21:34 -- common/autotest_common.sh@10 -- # set +x 00:13:42.804 Malloc0 00:13:42.804 [2024-07-27 01:21:34.393709] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.804 01:21:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.804 01:21:34 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:42.804 01:21:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:42.804 01:21:34 -- common/autotest_common.sh@10 -- # set +x 00:13:42.804 01:21:34 -- target/host_management.sh@73 -- # perfpid=609201 00:13:42.804 01:21:34 -- target/host_management.sh@74 -- # 
waitforlisten 609201 /var/tmp/bdevperf.sock 00:13:42.804 01:21:34 -- common/autotest_common.sh@819 -- # '[' -z 609201 ']' 00:13:42.804 01:21:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:42.804 01:21:34 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:42.804 01:21:34 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:42.804 01:21:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:42.804 01:21:34 -- nvmf/common.sh@520 -- # config=() 00:13:42.804 01:21:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:42.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:42.804 01:21:34 -- nvmf/common.sh@520 -- # local subsystem config 00:13:42.804 01:21:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:42.804 01:21:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:42.804 01:21:34 -- common/autotest_common.sh@10 -- # set +x 00:13:42.804 01:21:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:42.804 { 00:13:42.804 "params": { 00:13:42.804 "name": "Nvme$subsystem", 00:13:42.804 "trtype": "$TEST_TRANSPORT", 00:13:42.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.804 "adrfam": "ipv4", 00:13:42.804 "trsvcid": "$NVMF_PORT", 00:13:42.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.804 "hdgst": ${hdgst:-false}, 00:13:42.804 "ddgst": ${ddgst:-false} 00:13:42.804 }, 00:13:42.804 "method": "bdev_nvme_attach_controller" 00:13:42.804 } 00:13:42.804 EOF 00:13:42.804 )") 00:13:42.804 01:21:34 -- nvmf/common.sh@542 -- # cat 00:13:42.804 01:21:34 -- nvmf/common.sh@544 -- # jq . 00:13:42.804 01:21:34 -- nvmf/common.sh@545 -- # IFS=, 00:13:42.804 01:21:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:42.804 "params": { 00:13:42.804 "name": "Nvme0", 00:13:42.804 "trtype": "tcp", 00:13:42.804 "traddr": "10.0.0.2", 00:13:42.804 "adrfam": "ipv4", 00:13:42.804 "trsvcid": "4420", 00:13:42.804 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:42.804 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:42.804 "hdgst": false, 00:13:42.804 "ddgst": false 00:13:42.804 }, 00:13:42.804 "method": "bdev_nvme_attach_controller" 00:13:42.804 }' 00:13:42.804 [2024-07-27 01:21:34.460488] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:42.804 [2024-07-27 01:21:34.460576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid609201 ] 00:13:42.804 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.804 [2024-07-27 01:21:34.522098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.062 [2024-07-27 01:21:34.629879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.322 Running I/O for 10 seconds... 
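For reference, the JSON fragment printed above by gen_nvmf_target_json is a bdev_nvme_attach_controller config entry that bdevperf reads through process substitution (--json /dev/fd/63). A rough, illustrative equivalent is to attach the same controller by hand over the bdevperf RPC socket; the sketch below is not part of the test run and only reuses the addresses, NQNs and flags already shown in the generated config.
# Illustrative only: attach the target exercised above as a bdev named Nvme0,
# using the same TCP listener (10.0.0.2:4420) and NQNs shown in the generated config.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
# bdevperf itself was launched above with -q 64 (queue depth), -o 65536 (64 KiB I/O),
# -w verify (read-back verification) and -t 10 (seconds).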
00:13:43.888 01:21:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:43.888 01:21:35 -- common/autotest_common.sh@852 -- # return 0 00:13:43.888 01:21:35 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:43.888 01:21:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.888 01:21:35 -- common/autotest_common.sh@10 -- # set +x 00:13:43.888 01:21:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.888 01:21:35 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:43.888 01:21:35 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:43.888 01:21:35 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:43.888 01:21:35 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:43.888 01:21:35 -- target/host_management.sh@52 -- # local ret=1 00:13:43.888 01:21:35 -- target/host_management.sh@53 -- # local i 00:13:43.888 01:21:35 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:43.888 01:21:35 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:43.888 01:21:35 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:43.888 01:21:35 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:43.888 01:21:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.888 01:21:35 -- common/autotest_common.sh@10 -- # set +x 00:13:43.888 01:21:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.888 01:21:35 -- target/host_management.sh@55 -- # read_io_count=1093 00:13:43.888 01:21:35 -- target/host_management.sh@58 -- # '[' 1093 -ge 100 ']' 00:13:43.888 01:21:35 -- target/host_management.sh@59 -- # ret=0 00:13:43.888 01:21:35 -- target/host_management.sh@60 -- # break 00:13:43.888 01:21:35 -- target/host_management.sh@64 -- # return 0 00:13:43.888 01:21:35 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:43.888 01:21:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.888 01:21:35 -- common/autotest_common.sh@10 -- # set +x 00:13:43.888 [2024-07-27 01:21:35.449590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the 
state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449787] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449838] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.888 [2024-07-27 01:21:35.449935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.449948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.449960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.449973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.449985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.449997] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450138] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299480 is same with the state(5) to be set 00:13:43.889 [2024-07-27 01:21:35.450830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.450872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:13:43.889 [2024-07-27 01:21:35.450902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.450919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.450937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.450951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.450968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.450982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.450998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 
01:21:35.451225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451544] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.889 [2024-07-27 01:21:35.451653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.889 [2024-07-27 01:21:35.451667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.451682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.451697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.451713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.451727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.451742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.451756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.451772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.451786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.451802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.451816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.451831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.451846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.451862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.451876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.451893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.451907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.451927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.451942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.451958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.451972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.451988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.890 [2024-07-27 01:21:35.452831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.890 [2024-07-27 01:21:35.452847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.891 [2024-07-27 01:21:35.452861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.891 [2024-07-27 01:21:35.452876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.891 [2024-07-27 01:21:35.452890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.891 [2024-07-27 01:21:35.452905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4fb50 is same with the state(5) to be set 00:13:43.891 [2024-07-27 01:21:35.452979] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf4fb50 was disconnected and freed. reset controller. 00:13:43.891 01:21:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.891 01:21:35 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:43.891 01:21:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.891 01:21:35 -- common/autotest_common.sh@10 -- # set +x 00:13:43.891 [2024-07-27 01:21:35.454152] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:43.891 task offset: 27520 on job bdev=Nvme0n1 fails 00:13:43.891 00:13:43.891 Latency(us) 00:13:43.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.891 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:43.891 Job: Nvme0n1 ended in about 0.48 seconds with error 00:13:43.891 Verification LBA range: start 0x0 length 0x400 00:13:43.891 Nvme0n1 : 0.48 2528.58 158.04 132.65 0.00 23704.74 3665.16 30098.01 00:13:43.891 =================================================================================================================== 00:13:43.891 Total : 2528.58 158.04 132.65 0.00 23704.74 3665.16 30098.01 00:13:43.891 [2024-07-27 01:21:35.456187] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:43.891 [2024-07-27 01:21:35.456219] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf52400 (9): Bad file descriptor 00:13:43.891 01:21:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.891 01:21:35 -- target/host_management.sh@87 -- # sleep 1 00:13:43.891 [2024-07-27 01:21:35.516740] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
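The burst of "ABORTED - SQ DELETION" completions and the disconnected qpair above are the intended failure: host_management.sh removes the host from the subsystem's allowed-host list while bdevperf is mid-I/O, then re-adds it so the subsequent controller reset can succeed. A rough manual sketch of the same sequence, assuming the target uses the default /var/tmp/spdk.sock RPC socket as in this run:
# Illustrative only: drive the same remove/re-add cycle by hand.
./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0  # in-flight I/O is aborted, qpair torn down
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0     # host may reconnect; reset succeeds
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1                             # poll num_read_ops, as waitforio does above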
00:13:44.824 01:21:36 -- target/host_management.sh@91 -- # kill -9 609201 00:13:44.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (609201) - No such process 00:13:44.824 01:21:36 -- target/host_management.sh@91 -- # true 00:13:44.824 01:21:36 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:44.824 01:21:36 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:44.824 01:21:36 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:44.824 01:21:36 -- nvmf/common.sh@520 -- # config=() 00:13:44.824 01:21:36 -- nvmf/common.sh@520 -- # local subsystem config 00:13:44.824 01:21:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:44.824 01:21:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:44.824 { 00:13:44.824 "params": { 00:13:44.824 "name": "Nvme$subsystem", 00:13:44.824 "trtype": "$TEST_TRANSPORT", 00:13:44.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:44.824 "adrfam": "ipv4", 00:13:44.824 "trsvcid": "$NVMF_PORT", 00:13:44.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:44.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:44.824 "hdgst": ${hdgst:-false}, 00:13:44.824 "ddgst": ${ddgst:-false} 00:13:44.824 }, 00:13:44.824 "method": "bdev_nvme_attach_controller" 00:13:44.824 } 00:13:44.824 EOF 00:13:44.824 )") 00:13:44.824 01:21:36 -- nvmf/common.sh@542 -- # cat 00:13:44.824 01:21:36 -- nvmf/common.sh@544 -- # jq . 00:13:44.824 01:21:36 -- nvmf/common.sh@545 -- # IFS=, 00:13:44.824 01:21:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:44.824 "params": { 00:13:44.824 "name": "Nvme0", 00:13:44.824 "trtype": "tcp", 00:13:44.824 "traddr": "10.0.0.2", 00:13:44.824 "adrfam": "ipv4", 00:13:44.824 "trsvcid": "4420", 00:13:44.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:44.824 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:44.824 "hdgst": false, 00:13:44.824 "ddgst": false 00:13:44.824 }, 00:13:44.824 "method": "bdev_nvme_attach_controller" 00:13:44.824 }' 00:13:44.824 [2024-07-27 01:21:36.505615] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:44.824 [2024-07-27 01:21:36.505699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid609482 ] 00:13:44.824 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.824 [2024-07-27 01:21:36.565298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.082 [2024-07-27 01:21:36.673546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.340 Running I/O for 1 seconds... 
00:13:46.714 00:13:46.714 Latency(us) 00:13:46.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.714 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:46.714 Verification LBA range: start 0x0 length 0x400 00:13:46.714 Nvme0n1 : 1.05 2276.04 142.25 0.00 0.00 26666.29 4369.07 45632.47 00:13:46.714 =================================================================================================================== 00:13:46.714 Total : 2276.04 142.25 0.00 0.00 26666.29 4369.07 45632.47 00:13:46.714 01:21:38 -- target/host_management.sh@101 -- # stoptarget 00:13:46.714 01:21:38 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:46.714 01:21:38 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:46.714 01:21:38 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:46.714 01:21:38 -- target/host_management.sh@40 -- # nvmftestfini 00:13:46.714 01:21:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:46.714 01:21:38 -- nvmf/common.sh@116 -- # sync 00:13:46.714 01:21:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:46.714 01:21:38 -- nvmf/common.sh@119 -- # set +e 00:13:46.714 01:21:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:46.714 01:21:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:46.714 rmmod nvme_tcp 00:13:46.714 rmmod nvme_fabrics 00:13:46.714 rmmod nvme_keyring 00:13:46.714 01:21:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:46.714 01:21:38 -- nvmf/common.sh@123 -- # set -e 00:13:46.714 01:21:38 -- nvmf/common.sh@124 -- # return 0 00:13:46.714 01:21:38 -- nvmf/common.sh@477 -- # '[' -n 609025 ']' 00:13:46.714 01:21:38 -- nvmf/common.sh@478 -- # killprocess 609025 00:13:46.714 01:21:38 -- common/autotest_common.sh@926 -- # '[' -z 609025 ']' 00:13:46.714 01:21:38 -- common/autotest_common.sh@930 -- # kill -0 609025 00:13:46.714 01:21:38 -- common/autotest_common.sh@931 -- # uname 00:13:46.714 01:21:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:46.714 01:21:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 609025 00:13:46.714 01:21:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:46.714 01:21:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:46.714 01:21:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 609025' 00:13:46.714 killing process with pid 609025 00:13:46.714 01:21:38 -- common/autotest_common.sh@945 -- # kill 609025 00:13:46.714 01:21:38 -- common/autotest_common.sh@950 -- # wait 609025 00:13:46.973 [2024-07-27 01:21:38.659110] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:46.973 01:21:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:46.973 01:21:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:46.973 01:21:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:46.973 01:21:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.973 01:21:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:46.973 01:21:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.973 01:21:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.973 01:21:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.505 01:21:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:49.505 
00:13:49.505 real 0m7.441s 00:13:49.505 user 0m23.403s 00:13:49.505 sys 0m1.325s 00:13:49.505 01:21:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.505 01:21:40 -- common/autotest_common.sh@10 -- # set +x 00:13:49.505 ************************************ 00:13:49.505 END TEST nvmf_host_management 00:13:49.505 ************************************ 00:13:49.505 01:21:40 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:49.505 00:13:49.505 real 0m9.737s 00:13:49.505 user 0m24.208s 00:13:49.505 sys 0m2.843s 00:13:49.505 01:21:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.505 01:21:40 -- common/autotest_common.sh@10 -- # set +x 00:13:49.505 ************************************ 00:13:49.505 END TEST nvmf_host_management 00:13:49.505 ************************************ 00:13:49.505 01:21:40 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:49.505 01:21:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:49.505 01:21:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:49.505 01:21:40 -- common/autotest_common.sh@10 -- # set +x 00:13:49.505 ************************************ 00:13:49.505 START TEST nvmf_lvol 00:13:49.505 ************************************ 00:13:49.505 01:21:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:49.505 * Looking for test storage... 00:13:49.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.505 01:21:40 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.505 01:21:40 -- nvmf/common.sh@7 -- # uname -s 00:13:49.505 01:21:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.505 01:21:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.505 01:21:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.505 01:21:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.505 01:21:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.505 01:21:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.505 01:21:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.505 01:21:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.505 01:21:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.505 01:21:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.505 01:21:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.506 01:21:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.506 01:21:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.506 01:21:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.506 01:21:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.506 01:21:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.506 01:21:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.506 01:21:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.506 01:21:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.506 01:21:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.506 01:21:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.506 01:21:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.506 01:21:40 -- paths/export.sh@5 -- # export PATH 00:13:49.506 01:21:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.506 01:21:40 -- nvmf/common.sh@46 -- # : 0 00:13:49.506 01:21:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:49.506 01:21:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:49.506 01:21:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:49.506 01:21:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.506 01:21:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.506 01:21:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:49.506 01:21:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:49.506 01:21:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:49.506 01:21:40 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:49.506 01:21:40 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:49.506 01:21:40 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:49.506 01:21:40 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:49.506 01:21:40 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.506 01:21:40 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:49.506 01:21:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:49.506 01:21:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:13:49.506 01:21:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:49.506 01:21:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:49.506 01:21:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:49.506 01:21:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.506 01:21:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.506 01:21:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.506 01:21:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:49.506 01:21:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:49.506 01:21:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:49.506 01:21:40 -- common/autotest_common.sh@10 -- # set +x 00:13:51.410 01:21:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:51.410 01:21:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:51.410 01:21:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:51.410 01:21:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:51.410 01:21:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:51.410 01:21:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:51.410 01:21:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:51.410 01:21:42 -- nvmf/common.sh@294 -- # net_devs=() 00:13:51.410 01:21:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:51.410 01:21:42 -- nvmf/common.sh@295 -- # e810=() 00:13:51.410 01:21:42 -- nvmf/common.sh@295 -- # local -ga e810 00:13:51.410 01:21:42 -- nvmf/common.sh@296 -- # x722=() 00:13:51.410 01:21:42 -- nvmf/common.sh@296 -- # local -ga x722 00:13:51.410 01:21:42 -- nvmf/common.sh@297 -- # mlx=() 00:13:51.410 01:21:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:51.410 01:21:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.410 01:21:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.410 01:21:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.410 01:21:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.410 01:21:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.410 01:21:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.410 01:21:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.410 01:21:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.410 01:21:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.410 01:21:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.410 01:21:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.410 01:21:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:51.410 01:21:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:51.410 01:21:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:51.410 01:21:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:51.410 01:21:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:51.410 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:51.410 01:21:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:51.410 01:21:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:51.410 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:51.410 01:21:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:51.410 01:21:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:51.410 01:21:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.410 01:21:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:51.410 01:21:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.410 01:21:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:51.410 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:51.410 01:21:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.410 01:21:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:51.410 01:21:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.410 01:21:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:51.410 01:21:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.410 01:21:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:51.410 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:51.410 01:21:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.410 01:21:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:51.410 01:21:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:51.410 01:21:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:51.410 01:21:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:51.410 01:21:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.410 01:21:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.410 01:21:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.410 01:21:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:51.410 01:21:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.410 01:21:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.410 01:21:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:51.410 01:21:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.410 01:21:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.410 01:21:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:51.410 01:21:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:51.410 01:21:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.410 01:21:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.410 01:21:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:13:51.410 01:21:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.411 01:21:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:51.411 01:21:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.411 01:21:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.411 01:21:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.411 01:21:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:51.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:13:51.411 00:13:51.411 --- 10.0.0.2 ping statistics --- 00:13:51.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.411 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:13:51.411 01:21:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:51.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:13:51.411 00:13:51.411 --- 10.0.0.1 ping statistics --- 00:13:51.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.411 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:13:51.411 01:21:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.411 01:21:42 -- nvmf/common.sh@410 -- # return 0 00:13:51.411 01:21:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:51.411 01:21:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.411 01:21:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:51.411 01:21:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:51.411 01:21:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.411 01:21:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:51.411 01:21:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:51.411 01:21:42 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:51.411 01:21:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:51.411 01:21:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:51.411 01:21:42 -- common/autotest_common.sh@10 -- # set +x 00:13:51.411 01:21:42 -- nvmf/common.sh@469 -- # nvmfpid=611602 00:13:51.411 01:21:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:51.411 01:21:42 -- nvmf/common.sh@470 -- # waitforlisten 611602 00:13:51.411 01:21:42 -- common/autotest_common.sh@819 -- # '[' -z 611602 ']' 00:13:51.411 01:21:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.411 01:21:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:51.411 01:21:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.411 01:21:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:51.411 01:21:42 -- common/autotest_common.sh@10 -- # set +x 00:13:51.411 [2024-07-27 01:21:43.032948] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
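Note: the nvmf_tcp_init sequence traced above builds a two-port loopback topology: one E810 port stays in the default namespace as the initiator (cvl_0_1, 10.0.0.1) and the other is moved into a dedicated namespace as the target (cvl_0_0, 10.0.0.2). Condensed sketch of the commands as they appear in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator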
00:13:51.411 [2024-07-27 01:21:43.033015] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.411 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.411 [2024-07-27 01:21:43.097694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:51.702 [2024-07-27 01:21:43.205822] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:51.702 [2024-07-27 01:21:43.205975] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.702 [2024-07-27 01:21:43.205991] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.702 [2024-07-27 01:21:43.206004] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.702 [2024-07-27 01:21:43.206103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.702 [2024-07-27 01:21:43.206133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.702 [2024-07-27 01:21:43.206137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.287 01:21:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:52.287 01:21:44 -- common/autotest_common.sh@852 -- # return 0 00:13:52.287 01:21:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:52.287 01:21:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:52.287 01:21:44 -- common/autotest_common.sh@10 -- # set +x 00:13:52.544 01:21:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.544 01:21:44 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:52.801 [2024-07-27 01:21:44.315480] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.801 01:21:44 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:53.059 01:21:44 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:53.059 01:21:44 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:53.316 01:21:44 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:53.316 01:21:44 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:53.573 01:21:45 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:53.830 01:21:45 -- target/nvmf_lvol.sh@29 -- # lvs=e183115b-4b45-4383-b1ca-c34dc33360c6 00:13:53.830 01:21:45 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e183115b-4b45-4383-b1ca-c34dc33360c6 lvol 20 00:13:54.087 01:21:45 -- target/nvmf_lvol.sh@32 -- # lvol=eebf98c1-4fd1-48e9-a651-1eada1068412 00:13:54.087 01:21:45 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:54.087 01:21:45 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
eebf98c1-4fd1-48e9-a651-1eada1068412 00:13:54.344 01:21:46 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:54.601 [2024-07-27 01:21:46.295470] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.601 01:21:46 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:54.859 01:21:46 -- target/nvmf_lvol.sh@42 -- # perf_pid=612163 00:13:54.859 01:21:46 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:54.859 01:21:46 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:54.859 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.238 01:21:47 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot eebf98c1-4fd1-48e9-a651-1eada1068412 MY_SNAPSHOT 00:13:56.238 01:21:47 -- target/nvmf_lvol.sh@47 -- # snapshot=d82ad7ba-ab5a-4dac-b9c8-117e2e827c87 00:13:56.238 01:21:47 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize eebf98c1-4fd1-48e9-a651-1eada1068412 30 00:13:56.496 01:21:48 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d82ad7ba-ab5a-4dac-b9c8-117e2e827c87 MY_CLONE 00:13:56.754 01:21:48 -- target/nvmf_lvol.sh@49 -- # clone=6d7915ad-3f42-423a-b2aa-a2078f3f7866 00:13:56.754 01:21:48 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6d7915ad-3f42-423a-b2aa-a2078f3f7866 00:13:57.320 01:21:48 -- target/nvmf_lvol.sh@53 -- # wait 612163 00:14:05.442 Initializing NVMe Controllers 00:14:05.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:05.442 Controller IO queue size 128, less than required. 00:14:05.442 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:05.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:05.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:05.442 Initialization complete. Launching workers. 
00:14:05.442 ======================================================== 00:14:05.442 Latency(us) 00:14:05.442 Device Information : IOPS MiB/s Average min max 00:14:05.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8912.00 34.81 14372.66 1222.09 71679.01 00:14:05.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11445.90 44.71 11189.07 1688.77 65804.22 00:14:05.442 ======================================================== 00:14:05.442 Total : 20357.90 79.52 12582.74 1222.09 71679.01 00:14:05.442 00:14:05.442 01:21:56 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:05.702 01:21:57 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eebf98c1-4fd1-48e9-a651-1eada1068412 00:14:05.961 01:21:57 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e183115b-4b45-4383-b1ca-c34dc33360c6 00:14:05.961 01:21:57 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:05.961 01:21:57 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:05.961 01:21:57 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:05.961 01:21:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:05.961 01:21:57 -- nvmf/common.sh@116 -- # sync 00:14:05.961 01:21:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:05.961 01:21:57 -- nvmf/common.sh@119 -- # set +e 00:14:05.961 01:21:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:05.961 01:21:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:05.961 rmmod nvme_tcp 00:14:06.220 rmmod nvme_fabrics 00:14:06.221 rmmod nvme_keyring 00:14:06.221 01:21:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:06.221 01:21:57 -- nvmf/common.sh@123 -- # set -e 00:14:06.221 01:21:57 -- nvmf/common.sh@124 -- # return 0 00:14:06.221 01:21:57 -- nvmf/common.sh@477 -- # '[' -n 611602 ']' 00:14:06.221 01:21:57 -- nvmf/common.sh@478 -- # killprocess 611602 00:14:06.221 01:21:57 -- common/autotest_common.sh@926 -- # '[' -z 611602 ']' 00:14:06.221 01:21:57 -- common/autotest_common.sh@930 -- # kill -0 611602 00:14:06.221 01:21:57 -- common/autotest_common.sh@931 -- # uname 00:14:06.221 01:21:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:06.221 01:21:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 611602 00:14:06.221 01:21:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:06.221 01:21:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:06.221 01:21:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 611602' 00:14:06.221 killing process with pid 611602 00:14:06.221 01:21:57 -- common/autotest_common.sh@945 -- # kill 611602 00:14:06.221 01:21:57 -- common/autotest_common.sh@950 -- # wait 611602 00:14:06.479 01:21:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:06.479 01:21:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:06.479 01:21:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:06.479 01:21:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.479 01:21:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:06.479 01:21:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.479 01:21:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.479 01:21:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
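Note: for reference, the nvmf_lvol test body that produced the numbers above condenses to the following sequence (rpc.py stands for the full scripts/rpc.py path used in the trace; <...> placeholders stand for the UUIDs printed above):

    rpc.py bdev_malloc_create 64 512                  # Malloc0
    rpc.py bdev_malloc_create 64 512                  # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs         # -> <lvs-uuid>
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20     # -> <lvol-uuid>
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &   # backgrounded workload
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT        # snapshot while under I/O
    rpc.py bdev_lvol_resize   <lvol-uuid> 30
    rpc.py bdev_lvol_clone    <snapshot-uuid> MY_CLONE
    rpc.py bdev_lvol_inflate  <clone-uuid>
    wait                                              # let spdk_nvme_perf finish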
00:14:09.013 01:22:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:09.013 00:14:09.013 real 0m19.373s 00:14:09.013 user 1m0.464s 00:14:09.013 sys 0m8.093s 00:14:09.013 01:22:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:09.013 01:22:00 -- common/autotest_common.sh@10 -- # set +x 00:14:09.013 ************************************ 00:14:09.013 END TEST nvmf_lvol 00:14:09.013 ************************************ 00:14:09.013 01:22:00 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:09.013 01:22:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:09.013 01:22:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:09.013 01:22:00 -- common/autotest_common.sh@10 -- # set +x 00:14:09.013 ************************************ 00:14:09.013 START TEST nvmf_lvs_grow 00:14:09.013 ************************************ 00:14:09.013 01:22:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:09.013 * Looking for test storage... 00:14:09.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.014 01:22:00 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.014 01:22:00 -- nvmf/common.sh@7 -- # uname -s 00:14:09.014 01:22:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.014 01:22:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.014 01:22:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.014 01:22:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.014 01:22:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.014 01:22:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.014 01:22:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.014 01:22:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.014 01:22:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.014 01:22:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.014 01:22:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.014 01:22:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.014 01:22:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.014 01:22:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.014 01:22:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.014 01:22:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.014 01:22:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.014 01:22:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.014 01:22:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.014 01:22:00 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.014 01:22:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.014 01:22:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.014 01:22:00 -- paths/export.sh@5 -- # export PATH 00:14:09.014 01:22:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.014 01:22:00 -- nvmf/common.sh@46 -- # : 0 00:14:09.014 01:22:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:09.014 01:22:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:09.014 01:22:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:09.014 01:22:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.014 01:22:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.014 01:22:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:09.014 01:22:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:09.014 01:22:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:09.014 01:22:00 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.014 01:22:00 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:09.014 01:22:00 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:09.014 01:22:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:09.014 01:22:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.014 01:22:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:09.014 01:22:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:09.014 01:22:00 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:14:09.014 01:22:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.014 01:22:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.014 01:22:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.014 01:22:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:09.014 01:22:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:09.014 01:22:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:09.014 01:22:00 -- common/autotest_common.sh@10 -- # set +x 00:14:10.916 01:22:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:10.916 01:22:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:10.916 01:22:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:10.916 01:22:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:10.916 01:22:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:10.916 01:22:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:10.916 01:22:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:10.916 01:22:02 -- nvmf/common.sh@294 -- # net_devs=() 00:14:10.916 01:22:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:10.916 01:22:02 -- nvmf/common.sh@295 -- # e810=() 00:14:10.916 01:22:02 -- nvmf/common.sh@295 -- # local -ga e810 00:14:10.916 01:22:02 -- nvmf/common.sh@296 -- # x722=() 00:14:10.916 01:22:02 -- nvmf/common.sh@296 -- # local -ga x722 00:14:10.916 01:22:02 -- nvmf/common.sh@297 -- # mlx=() 00:14:10.916 01:22:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:10.916 01:22:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.916 01:22:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.916 01:22:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.916 01:22:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.916 01:22:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.916 01:22:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.916 01:22:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.916 01:22:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.916 01:22:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.916 01:22:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.916 01:22:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.916 01:22:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:10.916 01:22:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:10.916 01:22:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:10.916 01:22:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:10.916 01:22:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:10.916 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:10.916 01:22:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:10.916 
01:22:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:10.916 01:22:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:10.916 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:10.916 01:22:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:10.916 01:22:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:10.916 01:22:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.916 01:22:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:10.916 01:22:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.916 01:22:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:10.916 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:10.916 01:22:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.916 01:22:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:10.916 01:22:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.916 01:22:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:10.916 01:22:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.916 01:22:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:10.916 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:10.916 01:22:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.916 01:22:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:10.916 01:22:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:10.916 01:22:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:10.916 01:22:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:10.916 01:22:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.916 01:22:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.916 01:22:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.916 01:22:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:10.916 01:22:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.916 01:22:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.916 01:22:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:10.916 01:22:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.916 01:22:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.916 01:22:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:10.916 01:22:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:10.916 01:22:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.916 01:22:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.916 01:22:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.916 01:22:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.916 01:22:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:10.916 
01:22:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.916 01:22:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.916 01:22:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.916 01:22:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:10.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:14:10.917 00:14:10.917 --- 10.0.0.2 ping statistics --- 00:14:10.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.917 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:14:10.917 01:22:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:14:10.917 00:14:10.917 --- 10.0.0.1 ping statistics --- 00:14:10.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.917 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:14:10.917 01:22:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.917 01:22:02 -- nvmf/common.sh@410 -- # return 0 00:14:10.917 01:22:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:10.917 01:22:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.917 01:22:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:10.917 01:22:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:10.917 01:22:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.917 01:22:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:10.917 01:22:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:10.917 01:22:02 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:10.917 01:22:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:10.917 01:22:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:10.917 01:22:02 -- common/autotest_common.sh@10 -- # set +x 00:14:10.917 01:22:02 -- nvmf/common.sh@469 -- # nvmfpid=615476 00:14:10.917 01:22:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:10.917 01:22:02 -- nvmf/common.sh@470 -- # waitforlisten 615476 00:14:10.917 01:22:02 -- common/autotest_common.sh@819 -- # '[' -z 615476 ']' 00:14:10.917 01:22:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.917 01:22:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:10.917 01:22:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.917 01:22:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:10.917 01:22:02 -- common/autotest_common.sh@10 -- # set +x 00:14:10.917 [2024-07-27 01:22:02.394686] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
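Note: the target side for the lvs_grow tests is started inside the same namespace; condensed from the trace above (binary and rpc.py paths shortened, and the PID shown as a variable rather than the value printed in the log):

    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"             # wait until /var/tmp/spdk.sock answers
    rpc.py nvmf_create_transport -t tcp -o -u 8192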
00:14:10.917 [2024-07-27 01:22:02.394762] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.917 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.917 [2024-07-27 01:22:02.463253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.917 [2024-07-27 01:22:02.575760] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:10.917 [2024-07-27 01:22:02.575922] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.917 [2024-07-27 01:22:02.575941] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.917 [2024-07-27 01:22:02.575957] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.917 [2024-07-27 01:22:02.575996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.851 01:22:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:11.851 01:22:03 -- common/autotest_common.sh@852 -- # return 0 00:14:11.851 01:22:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:11.851 01:22:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:11.851 01:22:03 -- common/autotest_common.sh@10 -- # set +x 00:14:11.851 01:22:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.851 01:22:03 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:11.851 [2024-07-27 01:22:03.608334] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.109 01:22:03 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:12.109 01:22:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:12.109 01:22:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:12.109 01:22:03 -- common/autotest_common.sh@10 -- # set +x 00:14:12.109 ************************************ 00:14:12.109 START TEST lvs_grow_clean 00:14:12.109 ************************************ 00:14:12.109 01:22:03 -- common/autotest_common.sh@1104 -- # lvs_grow 00:14:12.109 01:22:03 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:12.109 01:22:03 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:12.109 01:22:03 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:12.109 01:22:03 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:12.109 01:22:03 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:12.109 01:22:03 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:12.109 01:22:03 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:12.109 01:22:03 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:12.109 01:22:03 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:12.367 01:22:03 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:12.367 01:22:03 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:12.367 01:22:04 -- target/nvmf_lvs_grow.sh@28 -- # lvs=d4747446-6120-446a-a34e-1970cc535eff 00:14:12.367 01:22:04 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4747446-6120-446a-a34e-1970cc535eff 00:14:12.367 01:22:04 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:12.937 01:22:04 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:12.937 01:22:04 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:12.937 01:22:04 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d4747446-6120-446a-a34e-1970cc535eff lvol 150 00:14:12.937 01:22:04 -- target/nvmf_lvs_grow.sh@33 -- # lvol=53879b3a-0a95-4ddf-b26c-397abe219b62 00:14:12.937 01:22:04 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:12.937 01:22:04 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:13.195 [2024-07-27 01:22:04.908473] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:13.195 [2024-07-27 01:22:04.908565] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:13.195 true 00:14:13.195 01:22:04 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4747446-6120-446a-a34e-1970cc535eff 00:14:13.195 01:22:04 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:13.455 01:22:05 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:13.455 01:22:05 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:13.715 01:22:05 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 53879b3a-0a95-4ddf-b26c-397abe219b62 00:14:13.973 01:22:05 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:14.235 [2024-07-27 01:22:05.891613] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.235 01:22:05 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:14.526 01:22:06 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=615927 00:14:14.526 01:22:06 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:14.526 01:22:06 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:14.526 01:22:06 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 615927 /var/tmp/bdevperf.sock 00:14:14.526 01:22:06 -- common/autotest_common.sh@819 -- # '[' -z 615927 ']' 00:14:14.526 01:22:06 
-- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.526 01:22:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:14.526 01:22:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.526 01:22:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:14.526 01:22:06 -- common/autotest_common.sh@10 -- # set +x 00:14:14.526 [2024-07-27 01:22:06.180209] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:14.526 [2024-07-27 01:22:06.180297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615927 ] 00:14:14.526 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.526 [2024-07-27 01:22:06.239031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.785 [2024-07-27 01:22:06.345781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.721 01:22:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:15.721 01:22:07 -- common/autotest_common.sh@852 -- # return 0 00:14:15.721 01:22:07 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:15.979 Nvme0n1 00:14:15.980 01:22:07 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:16.238 [ 00:14:16.238 { 00:14:16.238 "name": "Nvme0n1", 00:14:16.238 "aliases": [ 00:14:16.238 "53879b3a-0a95-4ddf-b26c-397abe219b62" 00:14:16.238 ], 00:14:16.238 "product_name": "NVMe disk", 00:14:16.238 "block_size": 4096, 00:14:16.238 "num_blocks": 38912, 00:14:16.238 "uuid": "53879b3a-0a95-4ddf-b26c-397abe219b62", 00:14:16.238 "assigned_rate_limits": { 00:14:16.238 "rw_ios_per_sec": 0, 00:14:16.238 "rw_mbytes_per_sec": 0, 00:14:16.238 "r_mbytes_per_sec": 0, 00:14:16.238 "w_mbytes_per_sec": 0 00:14:16.238 }, 00:14:16.238 "claimed": false, 00:14:16.238 "zoned": false, 00:14:16.238 "supported_io_types": { 00:14:16.238 "read": true, 00:14:16.238 "write": true, 00:14:16.238 "unmap": true, 00:14:16.238 "write_zeroes": true, 00:14:16.238 "flush": true, 00:14:16.238 "reset": true, 00:14:16.238 "compare": true, 00:14:16.238 "compare_and_write": true, 00:14:16.238 "abort": true, 00:14:16.238 "nvme_admin": true, 00:14:16.238 "nvme_io": true 00:14:16.238 }, 00:14:16.238 "driver_specific": { 00:14:16.238 "nvme": [ 00:14:16.238 { 00:14:16.238 "trid": { 00:14:16.238 "trtype": "TCP", 00:14:16.238 "adrfam": "IPv4", 00:14:16.238 "traddr": "10.0.0.2", 00:14:16.238 "trsvcid": "4420", 00:14:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:16.238 }, 00:14:16.238 "ctrlr_data": { 00:14:16.238 "cntlid": 1, 00:14:16.238 "vendor_id": "0x8086", 00:14:16.238 "model_number": "SPDK bdev Controller", 00:14:16.238 "serial_number": "SPDK0", 00:14:16.238 "firmware_revision": "24.01.1", 00:14:16.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:16.238 "oacs": { 00:14:16.238 "security": 0, 00:14:16.238 "format": 0, 00:14:16.238 "firmware": 0, 00:14:16.238 "ns_manage": 0 00:14:16.238 }, 00:14:16.238 "multi_ctrlr": true, 
00:14:16.238 "ana_reporting": false 00:14:16.238 }, 00:14:16.238 "vs": { 00:14:16.238 "nvme_version": "1.3" 00:14:16.238 }, 00:14:16.238 "ns_data": { 00:14:16.238 "id": 1, 00:14:16.238 "can_share": true 00:14:16.238 } 00:14:16.238 } 00:14:16.238 ], 00:14:16.238 "mp_policy": "active_passive" 00:14:16.238 } 00:14:16.238 } 00:14:16.238 ] 00:14:16.238 01:22:07 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=616084 00:14:16.238 01:22:07 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:16.238 01:22:07 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:16.238 Running I/O for 10 seconds... 00:14:17.177 Latency(us) 00:14:17.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.178 Nvme0n1 : 1.00 14850.00 58.01 0.00 0.00 0.00 0.00 0.00 00:14:17.178 =================================================================================================================== 00:14:17.178 Total : 14850.00 58.01 0.00 0.00 0.00 0.00 0.00 00:14:17.178 00:14:18.112 01:22:09 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d4747446-6120-446a-a34e-1970cc535eff 00:14:18.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.112 Nvme0n1 : 2.00 14980.00 58.52 0.00 0.00 0.00 0.00 0.00 00:14:18.112 =================================================================================================================== 00:14:18.112 Total : 14980.00 58.52 0.00 0.00 0.00 0.00 0.00 00:14:18.112 00:14:18.370 true 00:14:18.370 01:22:10 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4747446-6120-446a-a34e-1970cc535eff 00:14:18.370 01:22:10 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:18.629 01:22:10 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:18.629 01:22:10 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:18.629 01:22:10 -- target/nvmf_lvs_grow.sh@65 -- # wait 616084 00:14:19.198 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.198 Nvme0n1 : 3.00 15192.00 59.34 0.00 0.00 0.00 0.00 0.00 00:14:19.198 =================================================================================================================== 00:14:19.198 Total : 15192.00 59.34 0.00 0.00 0.00 0.00 0.00 00:14:19.198 00:14:20.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.137 Nvme0n1 : 4.00 15252.75 59.58 0.00 0.00 0.00 0.00 0.00 00:14:20.137 =================================================================================================================== 00:14:20.137 Total : 15252.75 59.58 0.00 0.00 0.00 0.00 0.00 00:14:20.137 00:14:21.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.516 Nvme0n1 : 5.00 15270.80 59.65 0.00 0.00 0.00 0.00 0.00 00:14:21.516 =================================================================================================================== 00:14:21.516 Total : 15270.80 59.65 0.00 0.00 0.00 0.00 0.00 00:14:21.516 00:14:22.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.453 Nvme0n1 : 6.00 15328.33 59.88 0.00 0.00 0.00 0.00 0.00 00:14:22.453 
=================================================================================================================== 00:14:22.453 Total : 15328.33 59.88 0.00 0.00 0.00 0.00 0.00 00:14:22.453 00:14:23.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.392 Nvme0n1 : 7.00 15379.43 60.08 0.00 0.00 0.00 0.00 0.00 00:14:23.392 =================================================================================================================== 00:14:23.392 Total : 15379.43 60.08 0.00 0.00 0.00 0.00 0.00 00:14:23.392 00:14:24.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.328 Nvme0n1 : 8.00 15384.25 60.09 0.00 0.00 0.00 0.00 0.00 00:14:24.328 =================================================================================================================== 00:14:24.328 Total : 15384.25 60.09 0.00 0.00 0.00 0.00 0.00 00:14:24.328 00:14:25.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.266 Nvme0n1 : 9.00 15395.78 60.14 0.00 0.00 0.00 0.00 0.00 00:14:25.266 =================================================================================================================== 00:14:25.266 Total : 15395.78 60.14 0.00 0.00 0.00 0.00 0.00 00:14:25.266 00:14:26.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.204 Nvme0n1 : 10.00 15417.80 60.23 0.00 0.00 0.00 0.00 0.00 00:14:26.204 =================================================================================================================== 00:14:26.204 Total : 15417.80 60.23 0.00 0.00 0.00 0.00 0.00 00:14:26.204 00:14:26.204 00:14:26.204 Latency(us) 00:14:26.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.204 Nvme0n1 : 10.01 15422.69 60.24 0.00 0.00 8294.39 4563.25 13398.47 00:14:26.204 =================================================================================================================== 00:14:26.204 Total : 15422.69 60.24 0.00 0.00 8294.39 4563.25 13398.47 00:14:26.204 0 00:14:26.204 01:22:17 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 615927 00:14:26.204 01:22:17 -- common/autotest_common.sh@926 -- # '[' -z 615927 ']' 00:14:26.205 01:22:17 -- common/autotest_common.sh@930 -- # kill -0 615927 00:14:26.205 01:22:17 -- common/autotest_common.sh@931 -- # uname 00:14:26.205 01:22:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:26.205 01:22:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 615927 00:14:26.205 01:22:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:26.205 01:22:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:26.205 01:22:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 615927' 00:14:26.205 killing process with pid 615927 00:14:26.205 01:22:17 -- common/autotest_common.sh@945 -- # kill 615927 00:14:26.205 Received shutdown signal, test time was about 10.000000 seconds 00:14:26.205 00:14:26.205 Latency(us) 00:14:26.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.205 =================================================================================================================== 00:14:26.205 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.205 01:22:17 -- common/autotest_common.sh@950 -- # wait 615927 00:14:26.463 01:22:18 -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:27.033 01:22:18 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4747446-6120-446a-a34e-1970cc535eff 00:14:27.033 01:22:18 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:27.033 01:22:18 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:27.033 01:22:18 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:27.033 01:22:18 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:27.293 [2024-07-27 01:22:18.944750] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:27.293 01:22:18 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4747446-6120-446a-a34e-1970cc535eff 00:14:27.293 01:22:18 -- common/autotest_common.sh@640 -- # local es=0 00:14:27.293 01:22:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4747446-6120-446a-a34e-1970cc535eff 00:14:27.293 01:22:18 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.293 01:22:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:27.293 01:22:18 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.293 01:22:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:27.293 01:22:18 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.293 01:22:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:27.293 01:22:18 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.293 01:22:18 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:27.293 01:22:18 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4747446-6120-446a-a34e-1970cc535eff 00:14:27.553 request: 00:14:27.553 { 00:14:27.553 "uuid": "d4747446-6120-446a-a34e-1970cc535eff", 00:14:27.553 "method": "bdev_lvol_get_lvstores", 00:14:27.553 "req_id": 1 00:14:27.553 } 00:14:27.553 Got JSON-RPC error response 00:14:27.553 response: 00:14:27.553 { 00:14:27.553 "code": -19, 00:14:27.553 "message": "No such device" 00:14:27.553 } 00:14:27.553 01:22:19 -- common/autotest_common.sh@643 -- # es=1 00:14:27.553 01:22:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:27.553 01:22:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:27.553 01:22:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:27.553 01:22:19 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:27.812 aio_bdev 00:14:27.812 01:22:19 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 53879b3a-0a95-4ddf-b26c-397abe219b62 00:14:27.812 01:22:19 -- common/autotest_common.sh@887 -- # local bdev_name=53879b3a-0a95-4ddf-b26c-397abe219b62 00:14:27.812 01:22:19 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:27.812 01:22:19 -- common/autotest_common.sh@889 -- # local i 00:14:27.812 01:22:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:27.812 01:22:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:27.812 01:22:19 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:28.070 01:22:19 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 53879b3a-0a95-4ddf-b26c-397abe219b62 -t 2000 00:14:28.330 [ 00:14:28.330 { 00:14:28.330 "name": "53879b3a-0a95-4ddf-b26c-397abe219b62", 00:14:28.330 "aliases": [ 00:14:28.330 "lvs/lvol" 00:14:28.330 ], 00:14:28.330 "product_name": "Logical Volume", 00:14:28.330 "block_size": 4096, 00:14:28.330 "num_blocks": 38912, 00:14:28.330 "uuid": "53879b3a-0a95-4ddf-b26c-397abe219b62", 00:14:28.330 "assigned_rate_limits": { 00:14:28.330 "rw_ios_per_sec": 0, 00:14:28.330 "rw_mbytes_per_sec": 0, 00:14:28.330 "r_mbytes_per_sec": 0, 00:14:28.330 "w_mbytes_per_sec": 0 00:14:28.330 }, 00:14:28.330 "claimed": false, 00:14:28.330 "zoned": false, 00:14:28.330 "supported_io_types": { 00:14:28.330 "read": true, 00:14:28.330 "write": true, 00:14:28.330 "unmap": true, 00:14:28.330 "write_zeroes": true, 00:14:28.330 "flush": false, 00:14:28.330 "reset": true, 00:14:28.330 "compare": false, 00:14:28.330 "compare_and_write": false, 00:14:28.330 "abort": false, 00:14:28.330 "nvme_admin": false, 00:14:28.330 "nvme_io": false 00:14:28.330 }, 00:14:28.330 "driver_specific": { 00:14:28.330 "lvol": { 00:14:28.330 "lvol_store_uuid": "d4747446-6120-446a-a34e-1970cc535eff", 00:14:28.330 "base_bdev": "aio_bdev", 00:14:28.330 "thin_provision": false, 00:14:28.330 "snapshot": false, 00:14:28.330 "clone": false, 00:14:28.331 "esnap_clone": false 00:14:28.331 } 00:14:28.331 } 00:14:28.331 } 00:14:28.331 ] 00:14:28.331 01:22:19 -- common/autotest_common.sh@895 -- # return 0 00:14:28.331 01:22:19 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4747446-6120-446a-a34e-1970cc535eff 00:14:28.331 01:22:19 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:28.589 01:22:20 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:28.589 01:22:20 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4747446-6120-446a-a34e-1970cc535eff 00:14:28.589 01:22:20 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:28.848 01:22:20 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:28.848 01:22:20 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 53879b3a-0a95-4ddf-b26c-397abe219b62 00:14:29.108 01:22:20 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d4747446-6120-446a-a34e-1970cc535eff 00:14:29.366 01:22:20 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:29.624 01:22:21 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:29.624 00:14:29.624 real 0m17.528s 00:14:29.624 user 0m17.105s 00:14:29.624 sys 0m1.942s 00:14:29.624 01:22:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 
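Note: the lvs_grow_clean body that just completed condenses to the following grow sequence (paths shortened; aio_file is shorthand for the test/nvmf/target/aio_bdev file used in the trace, and <lvs-uuid> stands for the lvstore UUID printed above):

    truncate -s 200M aio_file
    rpc.py bdev_aio_create aio_file aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
           --md-pages-per-cluster-ratio 300 aio_bdev lvs     # 49 data clusters
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
    truncate -s 400M aio_file                                # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev                          # 51200 -> 102400 blocks
    rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>              # 49 -> 99 data clusters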
00:14:29.624 01:22:21 -- common/autotest_common.sh@10 -- # set +x 00:14:29.624 ************************************ 00:14:29.624 END TEST lvs_grow_clean 00:14:29.624 ************************************ 00:14:29.624 01:22:21 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:29.624 01:22:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:29.624 01:22:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:29.624 01:22:21 -- common/autotest_common.sh@10 -- # set +x 00:14:29.624 ************************************ 00:14:29.624 START TEST lvs_grow_dirty 00:14:29.624 ************************************ 00:14:29.624 01:22:21 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:14:29.624 01:22:21 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:29.624 01:22:21 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:29.624 01:22:21 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:29.624 01:22:21 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:29.624 01:22:21 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:29.624 01:22:21 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:29.624 01:22:21 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:29.624 01:22:21 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:29.624 01:22:21 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:29.883 01:22:21 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:29.883 01:22:21 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:30.142 01:22:21 -- target/nvmf_lvs_grow.sh@28 -- # lvs=af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 00:14:30.142 01:22:21 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 00:14:30.142 01:22:21 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:30.402 01:22:21 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:30.402 01:22:21 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:30.402 01:22:21 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 lvol 150 00:14:30.662 01:22:22 -- target/nvmf_lvs_grow.sh@33 -- # lvol=5ac99d8a-f130-4447-a964-bad3f7d06234 00:14:30.662 01:22:22 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:30.662 01:22:22 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:30.957 [2024-07-27 01:22:22.421328] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:30.957 [2024-07-27 01:22:22.421440] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:30.957 
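Note: the cluster-count assertions in these tests are plain jq checks against bdev_lvol_get_lvstores, as seen in the traces; a sketch (paths and UUIDs shortened as above):

    data_clusters=$(rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> \
                    | jq -r '.[0].total_data_clusters')
    (( data_clusters == 49 ))       # before the grow; 99 is expected afterwards
    free_clusters=$(rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> \
                    | jq -r '.[0].free_clusters')
    (( free_clusters == 61 ))       # with the 150M lvol allocated out of the grown store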
true 00:14:30.957 01:22:22 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 00:14:30.957 01:22:22 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:31.216 01:22:22 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:31.216 01:22:22 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:31.476 01:22:22 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5ac99d8a-f130-4447-a964-bad3f7d06234 00:14:31.476 01:22:23 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:31.735 01:22:23 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.993 01:22:23 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=618065 00:14:31.993 01:22:23 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:31.993 01:22:23 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:31.993 01:22:23 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 618065 /var/tmp/bdevperf.sock 00:14:31.993 01:22:23 -- common/autotest_common.sh@819 -- # '[' -z 618065 ']' 00:14:31.993 01:22:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.993 01:22:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:31.993 01:22:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.993 01:22:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:31.993 01:22:23 -- common/autotest_common.sh@10 -- # set +x 00:14:31.993 [2024-07-27 01:22:23.739649] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:14:31.993 [2024-07-27 01:22:23.739733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618065 ] 00:14:32.251 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.251 [2024-07-27 01:22:23.802503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.251 [2024-07-27 01:22:23.918335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.183 01:22:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:33.183 01:22:24 -- common/autotest_common.sh@852 -- # return 0 00:14:33.183 01:22:24 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:33.442 Nvme0n1 00:14:33.442 01:22:25 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:33.701 [ 00:14:33.701 { 00:14:33.701 "name": "Nvme0n1", 00:14:33.701 "aliases": [ 00:14:33.701 "5ac99d8a-f130-4447-a964-bad3f7d06234" 00:14:33.701 ], 00:14:33.701 "product_name": "NVMe disk", 00:14:33.701 "block_size": 4096, 00:14:33.701 "num_blocks": 38912, 00:14:33.701 "uuid": "5ac99d8a-f130-4447-a964-bad3f7d06234", 00:14:33.701 "assigned_rate_limits": { 00:14:33.701 "rw_ios_per_sec": 0, 00:14:33.701 "rw_mbytes_per_sec": 0, 00:14:33.701 "r_mbytes_per_sec": 0, 00:14:33.701 "w_mbytes_per_sec": 0 00:14:33.701 }, 00:14:33.701 "claimed": false, 00:14:33.701 "zoned": false, 00:14:33.701 "supported_io_types": { 00:14:33.701 "read": true, 00:14:33.701 "write": true, 00:14:33.701 "unmap": true, 00:14:33.701 "write_zeroes": true, 00:14:33.701 "flush": true, 00:14:33.701 "reset": true, 00:14:33.701 "compare": true, 00:14:33.701 "compare_and_write": true, 00:14:33.701 "abort": true, 00:14:33.701 "nvme_admin": true, 00:14:33.701 "nvme_io": true 00:14:33.701 }, 00:14:33.701 "driver_specific": { 00:14:33.701 "nvme": [ 00:14:33.701 { 00:14:33.701 "trid": { 00:14:33.701 "trtype": "TCP", 00:14:33.701 "adrfam": "IPv4", 00:14:33.701 "traddr": "10.0.0.2", 00:14:33.701 "trsvcid": "4420", 00:14:33.701 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:33.701 }, 00:14:33.701 "ctrlr_data": { 00:14:33.702 "cntlid": 1, 00:14:33.702 "vendor_id": "0x8086", 00:14:33.702 "model_number": "SPDK bdev Controller", 00:14:33.702 "serial_number": "SPDK0", 00:14:33.702 "firmware_revision": "24.01.1", 00:14:33.702 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:33.702 "oacs": { 00:14:33.702 "security": 0, 00:14:33.702 "format": 0, 00:14:33.702 "firmware": 0, 00:14:33.702 "ns_manage": 0 00:14:33.702 }, 00:14:33.702 "multi_ctrlr": true, 00:14:33.702 "ana_reporting": false 00:14:33.702 }, 00:14:33.702 "vs": { 00:14:33.702 "nvme_version": "1.3" 00:14:33.702 }, 00:14:33.702 "ns_data": { 00:14:33.702 "id": 1, 00:14:33.702 "can_share": true 00:14:33.702 } 00:14:33.702 } 00:14:33.702 ], 00:14:33.702 "mp_policy": "active_passive" 00:14:33.702 } 00:14:33.702 } 00:14:33.702 ] 00:14:33.961 01:22:25 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=618336 00:14:33.961 01:22:25 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:33.961 01:22:25 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:33.961 Running I/O 
for 10 seconds... 00:14:34.899 Latency(us) 00:14:34.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.899 Nvme0n1 : 1.00 14798.00 57.80 0.00 0.00 0.00 0.00 0.00 00:14:34.899 =================================================================================================================== 00:14:34.899 Total : 14798.00 57.80 0.00 0.00 0.00 0.00 0.00 00:14:34.899 00:14:35.832 01:22:27 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 00:14:35.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.832 Nvme0n1 : 2.00 14823.00 57.90 0.00 0.00 0.00 0.00 0.00 00:14:35.832 =================================================================================================================== 00:14:35.832 Total : 14823.00 57.90 0.00 0.00 0.00 0.00 0.00 00:14:35.832 00:14:36.090 true 00:14:36.090 01:22:27 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 00:14:36.090 01:22:27 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:36.349 01:22:28 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:36.349 01:22:28 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:36.349 01:22:28 -- target/nvmf_lvs_grow.sh@65 -- # wait 618336 00:14:36.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.915 Nvme0n1 : 3.00 14858.67 58.04 0.00 0.00 0.00 0.00 0.00 00:14:36.915 =================================================================================================================== 00:14:36.915 Total : 14858.67 58.04 0.00 0.00 0.00 0.00 0.00 00:14:36.915 00:14:37.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.854 Nvme0n1 : 4.00 14915.75 58.26 0.00 0.00 0.00 0.00 0.00 00:14:37.854 =================================================================================================================== 00:14:37.854 Total : 14915.75 58.26 0.00 0.00 0.00 0.00 0.00 00:14:37.854 00:14:39.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.231 Nvme0n1 : 5.00 14953.40 58.41 0.00 0.00 0.00 0.00 0.00 00:14:39.231 =================================================================================================================== 00:14:39.231 Total : 14953.40 58.41 0.00 0.00 0.00 0.00 0.00 00:14:39.231 00:14:40.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.169 Nvme0n1 : 6.00 14999.83 58.59 0.00 0.00 0.00 0.00 0.00 00:14:40.169 =================================================================================================================== 00:14:40.169 Total : 14999.83 58.59 0.00 0.00 0.00 0.00 0.00 00:14:40.169 00:14:41.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.108 Nvme0n1 : 7.00 15033.00 58.72 0.00 0.00 0.00 0.00 0.00 00:14:41.108 =================================================================================================================== 00:14:41.108 Total : 15033.00 58.72 0.00 0.00 0.00 0.00 0.00 00:14:41.108 00:14:42.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.066 Nvme0n1 : 8.00 15065.75 58.85 0.00 0.00 0.00 0.00 0.00 00:14:42.066 
=================================================================================================================== 00:14:42.066 Total : 15065.75 58.85 0.00 0.00 0.00 0.00 0.00 00:14:42.066 00:14:43.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.004 Nvme0n1 : 9.00 15084.22 58.92 0.00 0.00 0.00 0.00 0.00 00:14:43.004 =================================================================================================================== 00:14:43.004 Total : 15084.22 58.92 0.00 0.00 0.00 0.00 0.00 00:14:43.004 00:14:43.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.943 Nvme0n1 : 10.00 15105.50 59.01 0.00 0.00 0.00 0.00 0.00 00:14:43.943 =================================================================================================================== 00:14:43.943 Total : 15105.50 59.01 0.00 0.00 0.00 0.00 0.00 00:14:43.943 00:14:43.943 00:14:43.943 Latency(us) 00:14:43.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.943 Nvme0n1 : 10.01 15106.09 59.01 0.00 0.00 8467.70 4102.07 13301.38 00:14:43.943 =================================================================================================================== 00:14:43.943 Total : 15106.09 59.01 0.00 0.00 8467.70 4102.07 13301.38 00:14:43.943 0 00:14:43.943 01:22:35 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 618065 00:14:43.943 01:22:35 -- common/autotest_common.sh@926 -- # '[' -z 618065 ']' 00:14:43.943 01:22:35 -- common/autotest_common.sh@930 -- # kill -0 618065 00:14:43.943 01:22:35 -- common/autotest_common.sh@931 -- # uname 00:14:43.943 01:22:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:43.943 01:22:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 618065 00:14:43.943 01:22:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:43.943 01:22:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:43.943 01:22:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 618065' 00:14:43.943 killing process with pid 618065 00:14:43.943 01:22:35 -- common/autotest_common.sh@945 -- # kill 618065 00:14:43.943 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.943 00:14:43.943 Latency(us) 00:14:43.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.943 =================================================================================================================== 00:14:43.943 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:43.943 01:22:35 -- common/autotest_common.sh@950 -- # wait 618065 00:14:44.202 01:22:35 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:44.769 01:22:36 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 00:14:44.769 01:22:36 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:45.029 01:22:36 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:45.029 01:22:36 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:45.029 01:22:36 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 615476 00:14:45.029 01:22:36 -- target/nvmf_lvs_grow.sh@74 -- # wait 615476 00:14:45.029 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 615476 Killed "${NVMF_APP[@]}" "$@" 00:14:45.029 01:22:36 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:45.029 01:22:36 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:45.029 01:22:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:45.029 01:22:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:45.029 01:22:36 -- common/autotest_common.sh@10 -- # set +x 00:14:45.029 01:22:36 -- nvmf/common.sh@469 -- # nvmfpid=619701 00:14:45.029 01:22:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:45.029 01:22:36 -- nvmf/common.sh@470 -- # waitforlisten 619701 00:14:45.029 01:22:36 -- common/autotest_common.sh@819 -- # '[' -z 619701 ']' 00:14:45.029 01:22:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.029 01:22:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:45.029 01:22:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.029 01:22:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:45.029 01:22:36 -- common/autotest_common.sh@10 -- # set +x 00:14:45.029 [2024-07-27 01:22:36.614942] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:45.029 [2024-07-27 01:22:36.615037] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.029 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.029 [2024-07-27 01:22:36.680125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.289 [2024-07-27 01:22:36.788216] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:45.289 [2024-07-27 01:22:36.788368] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.289 [2024-07-27 01:22:36.788385] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.289 [2024-07-27 01:22:36.788398] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
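The lvs_grow_dirty variant deliberately kills the nvmf_tgt with kill -9, as seen above, so the backing blobstore never gets a clean shutdown; the fresh target started here then re-creates the AIO bdev and relies on blobstore recovery (the bs_recover notices below) to rebuild the lvstore. A minimal sketch of the verification steps that follow, with the long workspace prefix abbreviated to scripts/rpc.py; the UUIDs and expected cluster counts are the ones printed in this log:

  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_get_bdevs -b 5ac99d8a-f130-4447-a964-bad3f7d06234 -t 2000   # wait for the lvol to reappear
  scripts/rpc.py bdev_lvol_get_lvstores -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 | jq -r '.[0].free_clusters'        # expect 61
  scripts/rpc.py bdev_lvol_get_lvstores -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 | jq -r '.[0].total_data_clusters'  # expect 99: the earlier grow from 49 clusters survived the crash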
00:14:45.289 [2024-07-27 01:22:36.788427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.856 01:22:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:45.856 01:22:37 -- common/autotest_common.sh@852 -- # return 0 00:14:45.856 01:22:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:45.856 01:22:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:45.856 01:22:37 -- common/autotest_common.sh@10 -- # set +x 00:14:45.856 01:22:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.856 01:22:37 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:46.116 [2024-07-27 01:22:37.834776] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:46.116 [2024-07-27 01:22:37.834938] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:46.116 [2024-07-27 01:22:37.834988] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:46.116 01:22:37 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:46.116 01:22:37 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 5ac99d8a-f130-4447-a964-bad3f7d06234 00:14:46.116 01:22:37 -- common/autotest_common.sh@887 -- # local bdev_name=5ac99d8a-f130-4447-a964-bad3f7d06234 00:14:46.116 01:22:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:46.116 01:22:37 -- common/autotest_common.sh@889 -- # local i 00:14:46.116 01:22:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:46.116 01:22:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:46.116 01:22:37 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:46.376 01:22:38 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5ac99d8a-f130-4447-a964-bad3f7d06234 -t 2000 00:14:46.636 [ 00:14:46.636 { 00:14:46.636 "name": "5ac99d8a-f130-4447-a964-bad3f7d06234", 00:14:46.636 "aliases": [ 00:14:46.636 "lvs/lvol" 00:14:46.636 ], 00:14:46.636 "product_name": "Logical Volume", 00:14:46.636 "block_size": 4096, 00:14:46.636 "num_blocks": 38912, 00:14:46.636 "uuid": "5ac99d8a-f130-4447-a964-bad3f7d06234", 00:14:46.636 "assigned_rate_limits": { 00:14:46.636 "rw_ios_per_sec": 0, 00:14:46.636 "rw_mbytes_per_sec": 0, 00:14:46.636 "r_mbytes_per_sec": 0, 00:14:46.636 "w_mbytes_per_sec": 0 00:14:46.636 }, 00:14:46.636 "claimed": false, 00:14:46.636 "zoned": false, 00:14:46.636 "supported_io_types": { 00:14:46.636 "read": true, 00:14:46.637 "write": true, 00:14:46.637 "unmap": true, 00:14:46.637 "write_zeroes": true, 00:14:46.637 "flush": false, 00:14:46.637 "reset": true, 00:14:46.637 "compare": false, 00:14:46.637 "compare_and_write": false, 00:14:46.637 "abort": false, 00:14:46.637 "nvme_admin": false, 00:14:46.637 "nvme_io": false 00:14:46.637 }, 00:14:46.637 "driver_specific": { 00:14:46.637 "lvol": { 00:14:46.637 "lvol_store_uuid": "af5e48d1-6bba-4fbb-bd72-8ce08414d9f2", 00:14:46.637 "base_bdev": "aio_bdev", 00:14:46.637 "thin_provision": false, 00:14:46.637 "snapshot": false, 00:14:46.637 "clone": false, 00:14:46.637 "esnap_clone": false 00:14:46.637 } 00:14:46.637 } 00:14:46.637 } 00:14:46.637 ] 00:14:46.637 01:22:38 -- common/autotest_common.sh@895 -- # return 0 00:14:46.637 01:22:38 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 00:14:46.637 01:22:38 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:46.897 01:22:38 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:46.897 01:22:38 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 00:14:46.897 01:22:38 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:47.156 01:22:38 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:47.156 01:22:38 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:47.416 [2024-07-27 01:22:39.067834] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:47.416 01:22:39 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 00:14:47.416 01:22:39 -- common/autotest_common.sh@640 -- # local es=0 00:14:47.416 01:22:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 00:14:47.416 01:22:39 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.416 01:22:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:47.416 01:22:39 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.416 01:22:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:47.416 01:22:39 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.416 01:22:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:47.416 01:22:39 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.416 01:22:39 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:47.416 01:22:39 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 00:14:47.676 request: 00:14:47.676 { 00:14:47.676 "uuid": "af5e48d1-6bba-4fbb-bd72-8ce08414d9f2", 00:14:47.676 "method": "bdev_lvol_get_lvstores", 00:14:47.676 "req_id": 1 00:14:47.676 } 00:14:47.676 Got JSON-RPC error response 00:14:47.676 response: 00:14:47.676 { 00:14:47.676 "code": -19, 00:14:47.676 "message": "No such device" 00:14:47.676 } 00:14:47.676 01:22:39 -- common/autotest_common.sh@643 -- # es=1 00:14:47.676 01:22:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:47.676 01:22:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:47.676 01:22:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:47.676 01:22:39 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:47.939 aio_bdev 00:14:47.939 01:22:39 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 5ac99d8a-f130-4447-a964-bad3f7d06234 00:14:47.939 01:22:39 -- 
common/autotest_common.sh@887 -- # local bdev_name=5ac99d8a-f130-4447-a964-bad3f7d06234 00:14:47.939 01:22:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:47.939 01:22:39 -- common/autotest_common.sh@889 -- # local i 00:14:47.939 01:22:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:47.939 01:22:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:47.939 01:22:39 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:48.241 01:22:39 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5ac99d8a-f130-4447-a964-bad3f7d06234 -t 2000 00:14:48.500 [ 00:14:48.500 { 00:14:48.500 "name": "5ac99d8a-f130-4447-a964-bad3f7d06234", 00:14:48.500 "aliases": [ 00:14:48.500 "lvs/lvol" 00:14:48.500 ], 00:14:48.500 "product_name": "Logical Volume", 00:14:48.500 "block_size": 4096, 00:14:48.500 "num_blocks": 38912, 00:14:48.500 "uuid": "5ac99d8a-f130-4447-a964-bad3f7d06234", 00:14:48.500 "assigned_rate_limits": { 00:14:48.500 "rw_ios_per_sec": 0, 00:14:48.500 "rw_mbytes_per_sec": 0, 00:14:48.500 "r_mbytes_per_sec": 0, 00:14:48.500 "w_mbytes_per_sec": 0 00:14:48.500 }, 00:14:48.500 "claimed": false, 00:14:48.500 "zoned": false, 00:14:48.500 "supported_io_types": { 00:14:48.500 "read": true, 00:14:48.500 "write": true, 00:14:48.500 "unmap": true, 00:14:48.500 "write_zeroes": true, 00:14:48.500 "flush": false, 00:14:48.500 "reset": true, 00:14:48.500 "compare": false, 00:14:48.500 "compare_and_write": false, 00:14:48.500 "abort": false, 00:14:48.500 "nvme_admin": false, 00:14:48.500 "nvme_io": false 00:14:48.500 }, 00:14:48.500 "driver_specific": { 00:14:48.500 "lvol": { 00:14:48.500 "lvol_store_uuid": "af5e48d1-6bba-4fbb-bd72-8ce08414d9f2", 00:14:48.500 "base_bdev": "aio_bdev", 00:14:48.500 "thin_provision": false, 00:14:48.500 "snapshot": false, 00:14:48.500 "clone": false, 00:14:48.500 "esnap_clone": false 00:14:48.500 } 00:14:48.500 } 00:14:48.500 } 00:14:48.500 ] 00:14:48.500 01:22:40 -- common/autotest_common.sh@895 -- # return 0 00:14:48.500 01:22:40 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 00:14:48.500 01:22:40 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:48.758 01:22:40 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:48.758 01:22:40 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 00:14:48.758 01:22:40 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:49.017 01:22:40 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:49.017 01:22:40 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5ac99d8a-f130-4447-a964-bad3f7d06234 00:14:49.277 01:22:40 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u af5e48d1-6bba-4fbb-bd72-8ce08414d9f2 00:14:49.537 01:22:41 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:49.797 01:22:41 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:49.797 00:14:49.797 real 0m20.197s 00:14:49.797 user 
0m50.221s 00:14:49.797 sys 0m4.846s 00:14:49.797 01:22:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:49.797 01:22:41 -- common/autotest_common.sh@10 -- # set +x 00:14:49.797 ************************************ 00:14:49.797 END TEST lvs_grow_dirty 00:14:49.797 ************************************ 00:14:49.797 01:22:41 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:49.797 01:22:41 -- common/autotest_common.sh@796 -- # type=--id 00:14:49.797 01:22:41 -- common/autotest_common.sh@797 -- # id=0 00:14:49.797 01:22:41 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:14:49.797 01:22:41 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:49.797 01:22:41 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:14:49.797 01:22:41 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:14:49.797 01:22:41 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:14:49.797 01:22:41 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:49.797 nvmf_trace.0 00:14:49.797 01:22:41 -- common/autotest_common.sh@811 -- # return 0 00:14:49.797 01:22:41 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:49.797 01:22:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:49.797 01:22:41 -- nvmf/common.sh@116 -- # sync 00:14:49.797 01:22:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:49.797 01:22:41 -- nvmf/common.sh@119 -- # set +e 00:14:49.797 01:22:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:49.797 01:22:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:49.797 rmmod nvme_tcp 00:14:49.797 rmmod nvme_fabrics 00:14:49.797 rmmod nvme_keyring 00:14:49.797 01:22:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:49.797 01:22:41 -- nvmf/common.sh@123 -- # set -e 00:14:49.797 01:22:41 -- nvmf/common.sh@124 -- # return 0 00:14:49.797 01:22:41 -- nvmf/common.sh@477 -- # '[' -n 619701 ']' 00:14:49.797 01:22:41 -- nvmf/common.sh@478 -- # killprocess 619701 00:14:49.797 01:22:41 -- common/autotest_common.sh@926 -- # '[' -z 619701 ']' 00:14:49.797 01:22:41 -- common/autotest_common.sh@930 -- # kill -0 619701 00:14:49.797 01:22:41 -- common/autotest_common.sh@931 -- # uname 00:14:49.797 01:22:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:49.797 01:22:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 619701 00:14:49.797 01:22:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:49.797 01:22:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:49.797 01:22:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 619701' 00:14:49.797 killing process with pid 619701 00:14:49.797 01:22:41 -- common/autotest_common.sh@945 -- # kill 619701 00:14:49.797 01:22:41 -- common/autotest_common.sh@950 -- # wait 619701 00:14:50.056 01:22:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:50.056 01:22:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:50.056 01:22:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:50.056 01:22:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.056 01:22:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:50.056 01:22:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.056 01:22:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.056 01:22:41 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:52.590 01:22:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:52.590 00:14:52.590 real 0m43.647s 00:14:52.590 user 1m13.742s 00:14:52.590 sys 0m8.640s 00:14:52.590 01:22:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.590 01:22:43 -- common/autotest_common.sh@10 -- # set +x 00:14:52.590 ************************************ 00:14:52.590 END TEST nvmf_lvs_grow 00:14:52.590 ************************************ 00:14:52.590 01:22:43 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:52.590 01:22:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:52.590 01:22:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:52.590 01:22:43 -- common/autotest_common.sh@10 -- # set +x 00:14:52.590 ************************************ 00:14:52.590 START TEST nvmf_bdev_io_wait 00:14:52.590 ************************************ 00:14:52.590 01:22:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:52.590 * Looking for test storage... 00:14:52.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.590 01:22:43 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.590 01:22:43 -- nvmf/common.sh@7 -- # uname -s 00:14:52.590 01:22:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.590 01:22:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.590 01:22:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.590 01:22:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.590 01:22:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.590 01:22:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.590 01:22:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.590 01:22:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.590 01:22:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.590 01:22:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.590 01:22:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.590 01:22:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.590 01:22:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.590 01:22:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.590 01:22:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.590 01:22:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.590 01:22:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.590 01:22:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.590 01:22:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.590 01:22:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.590 01:22:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.590 01:22:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.590 01:22:43 -- paths/export.sh@5 -- # export PATH 00:14:52.591 01:22:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.591 01:22:43 -- nvmf/common.sh@46 -- # : 0 00:14:52.591 01:22:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:52.591 01:22:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:52.591 01:22:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:52.591 01:22:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.591 01:22:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.591 01:22:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:52.591 01:22:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:52.591 01:22:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:52.591 01:22:43 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:52.591 01:22:43 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:52.591 01:22:43 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:52.591 01:22:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:52.591 01:22:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.591 01:22:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:52.591 01:22:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:52.591 01:22:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:52.591 01:22:43 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.591 01:22:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.591 01:22:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.591 01:22:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:52.591 01:22:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:52.591 01:22:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:52.591 01:22:43 -- common/autotest_common.sh@10 -- # set +x 00:14:54.493 01:22:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:54.493 01:22:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:54.493 01:22:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:54.493 01:22:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:54.493 01:22:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:54.493 01:22:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:54.493 01:22:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:54.493 01:22:45 -- nvmf/common.sh@294 -- # net_devs=() 00:14:54.493 01:22:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:54.493 01:22:45 -- nvmf/common.sh@295 -- # e810=() 00:14:54.493 01:22:45 -- nvmf/common.sh@295 -- # local -ga e810 00:14:54.493 01:22:45 -- nvmf/common.sh@296 -- # x722=() 00:14:54.493 01:22:45 -- nvmf/common.sh@296 -- # local -ga x722 00:14:54.493 01:22:45 -- nvmf/common.sh@297 -- # mlx=() 00:14:54.493 01:22:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:54.493 01:22:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.493 01:22:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.493 01:22:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.493 01:22:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.493 01:22:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.493 01:22:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.493 01:22:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.493 01:22:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.493 01:22:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.493 01:22:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.493 01:22:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.493 01:22:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:54.493 01:22:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:54.493 01:22:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:54.493 01:22:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:54.493 01:22:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:54.493 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:54.493 01:22:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:14:54.493 01:22:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:54.493 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:54.493 01:22:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:54.493 01:22:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:54.493 01:22:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.493 01:22:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:54.493 01:22:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.493 01:22:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:54.493 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:54.493 01:22:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.493 01:22:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:54.493 01:22:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.493 01:22:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:54.493 01:22:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.493 01:22:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:54.493 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:54.493 01:22:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.493 01:22:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:54.493 01:22:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:54.493 01:22:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:54.493 01:22:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:54.493 01:22:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.493 01:22:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.493 01:22:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:54.493 01:22:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:54.493 01:22:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:54.493 01:22:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:54.493 01:22:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:54.493 01:22:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:54.493 01:22:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.493 01:22:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:54.493 01:22:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:54.493 01:22:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:54.493 01:22:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.493 01:22:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.493 01:22:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.493 01:22:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:54.493 01:22:45 -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.493 01:22:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.493 01:22:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.493 01:22:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:54.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:14:54.493 00:14:54.493 --- 10.0.0.2 ping statistics --- 00:14:54.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.493 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:14:54.493 01:22:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:14:54.493 00:14:54.493 --- 10.0.0.1 ping statistics --- 00:14:54.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.493 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:14:54.494 01:22:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.494 01:22:46 -- nvmf/common.sh@410 -- # return 0 00:14:54.494 01:22:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:54.494 01:22:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.494 01:22:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:54.494 01:22:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:54.494 01:22:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.494 01:22:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:54.494 01:22:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:54.494 01:22:46 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:54.494 01:22:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:54.494 01:22:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:54.494 01:22:46 -- common/autotest_common.sh@10 -- # set +x 00:14:54.494 01:22:46 -- nvmf/common.sh@469 -- # nvmfpid=622255 00:14:54.494 01:22:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:54.494 01:22:46 -- nvmf/common.sh@470 -- # waitforlisten 622255 00:14:54.494 01:22:46 -- common/autotest_common.sh@819 -- # '[' -z 622255 ']' 00:14:54.494 01:22:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.494 01:22:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:54.494 01:22:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.494 01:22:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:54.494 01:22:46 -- common/autotest_common.sh@10 -- # set +x 00:14:54.494 [2024-07-27 01:22:46.082455] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:14:54.494 [2024-07-27 01:22:46.082522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.494 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.494 [2024-07-27 01:22:46.147916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.752 [2024-07-27 01:22:46.255547] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:54.752 [2024-07-27 01:22:46.255696] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.752 [2024-07-27 01:22:46.255728] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.752 [2024-07-27 01:22:46.255741] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.752 [2024-07-27 01:22:46.255795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.752 [2024-07-27 01:22:46.255820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.752 [2024-07-27 01:22:46.255885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.752 [2024-07-27 01:22:46.255888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.320 01:22:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:55.320 01:22:47 -- common/autotest_common.sh@852 -- # return 0 00:14:55.320 01:22:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:55.320 01:22:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:55.320 01:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:55.320 01:22:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.320 01:22:47 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:55.320 01:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.320 01:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:55.320 01:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.320 01:22:47 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:55.320 01:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.320 01:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:55.579 01:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:55.579 01:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.579 01:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:55.579 [2024-07-27 01:22:47.139089] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.579 01:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:55.579 01:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.579 01:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:55.579 Malloc0 00:14:55.579 01:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:55.579 01:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.579 01:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:55.579 01:22:47 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:55.579 01:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.579 01:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:55.579 01:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.579 01:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.579 01:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:55.579 [2024-07-27 01:22:47.201868] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.579 01:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=622416 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@30 -- # READ_PID=622418 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:55.579 01:22:47 -- nvmf/common.sh@520 -- # config=() 00:14:55.579 01:22:47 -- nvmf/common.sh@520 -- # local subsystem config 00:14:55.579 01:22:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:55.579 01:22:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:55.579 { 00:14:55.579 "params": { 00:14:55.579 "name": "Nvme$subsystem", 00:14:55.579 "trtype": "$TEST_TRANSPORT", 00:14:55.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:55.579 "adrfam": "ipv4", 00:14:55.579 "trsvcid": "$NVMF_PORT", 00:14:55.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:55.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:55.579 "hdgst": ${hdgst:-false}, 00:14:55.579 "ddgst": ${ddgst:-false} 00:14:55.579 }, 00:14:55.579 "method": "bdev_nvme_attach_controller" 00:14:55.579 } 00:14:55.579 EOF 00:14:55.579 )") 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=622420 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:55.579 01:22:47 -- nvmf/common.sh@520 -- # config=() 00:14:55.579 01:22:47 -- nvmf/common.sh@520 -- # local subsystem config 00:14:55.579 01:22:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:55.579 01:22:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:55.579 { 00:14:55.579 "params": { 00:14:55.579 "name": "Nvme$subsystem", 00:14:55.579 "trtype": "$TEST_TRANSPORT", 00:14:55.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:55.579 "adrfam": "ipv4", 00:14:55.579 "trsvcid": "$NVMF_PORT", 00:14:55.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:55.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:55.579 "hdgst": ${hdgst:-false}, 00:14:55.579 "ddgst": ${ddgst:-false} 00:14:55.579 }, 00:14:55.579 "method": "bdev_nvme_attach_controller" 00:14:55.579 } 00:14:55.579 EOF 00:14:55.579 )") 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=622423 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@35 -- # sync 00:14:55.579 01:22:47 -- nvmf/common.sh@542 -- # cat 00:14:55.579 01:22:47 -- nvmf/common.sh@520 -- # config=() 00:14:55.579 01:22:47 -- nvmf/common.sh@520 -- # local subsystem config 00:14:55.579 01:22:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:55.579 01:22:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:55.579 { 00:14:55.579 "params": { 00:14:55.579 "name": "Nvme$subsystem", 00:14:55.579 "trtype": "$TEST_TRANSPORT", 00:14:55.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:55.579 "adrfam": "ipv4", 00:14:55.579 "trsvcid": "$NVMF_PORT", 00:14:55.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:55.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:55.579 "hdgst": ${hdgst:-false}, 00:14:55.579 "ddgst": ${ddgst:-false} 00:14:55.579 }, 00:14:55.579 "method": "bdev_nvme_attach_controller" 00:14:55.579 } 00:14:55.579 EOF 00:14:55.579 )") 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:55.579 01:22:47 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:55.579 01:22:47 -- nvmf/common.sh@520 -- # config=() 00:14:55.579 01:22:47 -- nvmf/common.sh@542 -- # cat 00:14:55.579 01:22:47 -- nvmf/common.sh@520 -- # local subsystem config 00:14:55.579 01:22:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:55.579 01:22:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:55.579 { 00:14:55.579 "params": { 00:14:55.579 "name": "Nvme$subsystem", 00:14:55.580 "trtype": "$TEST_TRANSPORT", 00:14:55.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:55.580 "adrfam": "ipv4", 00:14:55.580 "trsvcid": "$NVMF_PORT", 00:14:55.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:55.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:55.580 "hdgst": ${hdgst:-false}, 00:14:55.580 "ddgst": ${ddgst:-false} 00:14:55.580 }, 00:14:55.580 "method": "bdev_nvme_attach_controller" 00:14:55.580 } 00:14:55.580 EOF 00:14:55.580 )") 00:14:55.580 01:22:47 -- nvmf/common.sh@542 -- # cat 00:14:55.580 01:22:47 -- target/bdev_io_wait.sh@37 -- # wait 622416 00:14:55.580 01:22:47 -- nvmf/common.sh@542 -- # cat 00:14:55.580 01:22:47 -- nvmf/common.sh@544 -- # jq . 00:14:55.580 01:22:47 -- nvmf/common.sh@544 -- # jq . 00:14:55.580 01:22:47 -- nvmf/common.sh@545 -- # IFS=, 00:14:55.580 01:22:47 -- nvmf/common.sh@544 -- # jq . 00:14:55.580 01:22:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:55.580 "params": { 00:14:55.580 "name": "Nvme1", 00:14:55.580 "trtype": "tcp", 00:14:55.580 "traddr": "10.0.0.2", 00:14:55.580 "adrfam": "ipv4", 00:14:55.580 "trsvcid": "4420", 00:14:55.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.580 "hdgst": false, 00:14:55.580 "ddgst": false 00:14:55.580 }, 00:14:55.580 "method": "bdev_nvme_attach_controller" 00:14:55.580 }' 00:14:55.580 01:22:47 -- nvmf/common.sh@544 -- # jq . 
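Each of the four bdevperf instances launched above receives its bdev configuration as generated JSON on /dev/fd/63; the config printed by the printf calls just below contains a single bdev_nvme_attach_controller entry pointing at 10.0.0.2:4420 and nqn.2016-06.io.spdk:cnode1. For comparison, the lvs_grow run earlier in this log performed the same attach imperatively over the bdevperf RPC socket instead of via --json; roughly, with the workspace prefix abbreviated:

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0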
00:14:55.580 01:22:47 -- nvmf/common.sh@545 -- # IFS=, 00:14:55.580 01:22:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:55.580 "params": { 00:14:55.580 "name": "Nvme1", 00:14:55.580 "trtype": "tcp", 00:14:55.580 "traddr": "10.0.0.2", 00:14:55.580 "adrfam": "ipv4", 00:14:55.580 "trsvcid": "4420", 00:14:55.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.580 "hdgst": false, 00:14:55.580 "ddgst": false 00:14:55.580 }, 00:14:55.580 "method": "bdev_nvme_attach_controller" 00:14:55.580 }' 00:14:55.580 01:22:47 -- nvmf/common.sh@545 -- # IFS=, 00:14:55.580 01:22:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:55.580 "params": { 00:14:55.580 "name": "Nvme1", 00:14:55.580 "trtype": "tcp", 00:14:55.580 "traddr": "10.0.0.2", 00:14:55.580 "adrfam": "ipv4", 00:14:55.580 "trsvcid": "4420", 00:14:55.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.580 "hdgst": false, 00:14:55.580 "ddgst": false 00:14:55.580 }, 00:14:55.580 "method": "bdev_nvme_attach_controller" 00:14:55.580 }' 00:14:55.580 01:22:47 -- nvmf/common.sh@545 -- # IFS=, 00:14:55.580 01:22:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:55.580 "params": { 00:14:55.580 "name": "Nvme1", 00:14:55.580 "trtype": "tcp", 00:14:55.580 "traddr": "10.0.0.2", 00:14:55.580 "adrfam": "ipv4", 00:14:55.580 "trsvcid": "4420", 00:14:55.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.580 "hdgst": false, 00:14:55.580 "ddgst": false 00:14:55.580 }, 00:14:55.580 "method": "bdev_nvme_attach_controller" 00:14:55.580 }' 00:14:55.580 [2024-07-27 01:22:47.244119] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:55.580 [2024-07-27 01:22:47.244120] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:55.580 [2024-07-27 01:22:47.244119] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:55.580 [2024-07-27 01:22:47.244121] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:14:55.580 [2024-07-27 01:22:47.244206] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 
00:14:55.580 [2024-07-27 01:22:47.244208] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 
00:14:55.580 [2024-07-27 01:22:47.244207] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 
00:14:55.580 [2024-07-27 01:22:47.244207] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 
00:14:55.580 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.838 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.838 [2024-07-27 01:22:47.410391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.838 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.838 [2024-07-27 01:22:47.504269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:55.838 [2024-07-27 01:22:47.506489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.838 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.095 [2024-07-27 01:22:47.604332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:56.095 [2024-07-27 01:22:47.609667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.095 [2024-07-27 01:22:47.706253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:56.095 [2024-07-27 01:22:47.713940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.095 [2024-07-27 01:22:47.803859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:56.354 Running I/O for 1 seconds... 00:14:56.354 Running I/O for 1 seconds... 00:14:56.354 Running I/O for 1 seconds... 00:14:56.354 Running I/O for 1 seconds... 
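In the per-job result tables that follow, the MiB/s column is just the IOPS column scaled by the fixed 4096-byte I/O size (MiB/s = IOPS * 4096 / 2^20); a quick sanity check against the flush and write rows:
# sanity-check the MiB/s column from the IOPS column (4 KiB I/O size)
awk 'BEGIN {
  printf "flush: %.2f MiB/s\n", 200846.72 * 4096 / (1024 * 1024)   # -> 784.56
  printf "write: %.2f MiB/s\n",  10321.03 * 4096 / (1024 * 1024)   # -> 40.32
}'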
00:14:57.292 00:14:57.292 Latency(us) 00:14:57.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.292 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:57.292 Nvme1n1 : 1.01 10321.03 40.32 0.00 0.00 12342.92 5679.79 25826.04 00:14:57.292 =================================================================================================================== 00:14:57.292 Total : 10321.03 40.32 0.00 0.00 12342.92 5679.79 25826.04 00:14:57.292 00:14:57.292 Latency(us) 00:14:57.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.292 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:57.292 Nvme1n1 : 1.00 200846.72 784.56 0.00 0.00 634.89 259.41 867.75 00:14:57.292 =================================================================================================================== 00:14:57.292 Total : 200846.72 784.56 0.00 0.00 634.89 259.41 867.75 00:14:57.292 00:14:57.292 Latency(us) 00:14:57.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.292 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:57.292 Nvme1n1 : 1.02 5653.11 22.08 0.00 0.00 22436.18 11845.03 36505.98 00:14:57.292 =================================================================================================================== 00:14:57.292 Total : 5653.11 22.08 0.00 0.00 22436.18 11845.03 36505.98 00:14:57.292 00:14:57.292 Latency(us) 00:14:57.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.292 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:57.292 Nvme1n1 : 1.01 5812.20 22.70 0.00 0.00 21933.70 7670.14 46603.38 00:14:57.292 =================================================================================================================== 00:14:57.292 Total : 5812.20 22.70 0.00 0.00 21933.70 7670.14 46603.38 00:14:57.860 01:22:49 -- target/bdev_io_wait.sh@38 -- # wait 622418 00:14:57.860 01:22:49 -- target/bdev_io_wait.sh@39 -- # wait 622420 00:14:57.860 01:22:49 -- target/bdev_io_wait.sh@40 -- # wait 622423 00:14:57.860 01:22:49 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.860 01:22:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.861 01:22:49 -- common/autotest_common.sh@10 -- # set +x 00:14:57.861 01:22:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.861 01:22:49 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:57.861 01:22:49 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:57.861 01:22:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:57.861 01:22:49 -- nvmf/common.sh@116 -- # sync 00:14:57.861 01:22:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:57.861 01:22:49 -- nvmf/common.sh@119 -- # set +e 00:14:57.861 01:22:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:57.861 01:22:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:57.861 rmmod nvme_tcp 00:14:57.861 rmmod nvme_fabrics 00:14:57.861 rmmod nvme_keyring 00:14:57.861 01:22:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:57.861 01:22:49 -- nvmf/common.sh@123 -- # set -e 00:14:57.861 01:22:49 -- nvmf/common.sh@124 -- # return 0 00:14:57.861 01:22:49 -- nvmf/common.sh@477 -- # '[' -n 622255 ']' 00:14:57.861 01:22:49 -- nvmf/common.sh@478 -- # killprocess 622255 00:14:57.861 01:22:49 -- common/autotest_common.sh@926 -- # '[' -z 622255 ']' 00:14:57.861 01:22:49 -- 
common/autotest_common.sh@930 -- # kill -0 622255 00:14:57.861 01:22:49 -- common/autotest_common.sh@931 -- # uname 00:14:57.861 01:22:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:57.861 01:22:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 622255 00:14:57.861 01:22:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:57.861 01:22:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:57.861 01:22:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 622255' 00:14:57.861 killing process with pid 622255 00:14:57.861 01:22:49 -- common/autotest_common.sh@945 -- # kill 622255 00:14:57.861 01:22:49 -- common/autotest_common.sh@950 -- # wait 622255 00:14:58.120 01:22:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:58.120 01:22:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:58.120 01:22:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:58.120 01:22:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:58.120 01:22:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:58.120 01:22:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.120 01:22:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.120 01:22:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.025 01:22:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:00.025 00:15:00.025 real 0m7.841s 00:15:00.025 user 0m19.022s 00:15:00.025 sys 0m3.422s 00:15:00.025 01:22:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.025 01:22:51 -- common/autotest_common.sh@10 -- # set +x 00:15:00.025 ************************************ 00:15:00.025 END TEST nvmf_bdev_io_wait 00:15:00.025 ************************************ 00:15:00.025 01:22:51 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:00.025 01:22:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:00.025 01:22:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:00.025 01:22:51 -- common/autotest_common.sh@10 -- # set +x 00:15:00.025 ************************************ 00:15:00.025 START TEST nvmf_queue_depth 00:15:00.025 ************************************ 00:15:00.025 01:22:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:00.025 * Looking for test storage... 
00:15:00.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.025 01:22:51 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.025 01:22:51 -- nvmf/common.sh@7 -- # uname -s 00:15:00.025 01:22:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.025 01:22:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.025 01:22:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.025 01:22:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.025 01:22:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.025 01:22:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.025 01:22:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.025 01:22:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.025 01:22:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.025 01:22:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.285 01:22:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.285 01:22:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.285 01:22:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.285 01:22:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.285 01:22:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.285 01:22:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.285 01:22:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.285 01:22:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.285 01:22:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.286 01:22:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.286 01:22:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.286 01:22:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.286 01:22:51 -- paths/export.sh@5 -- # export PATH 00:15:00.286 01:22:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.286 01:22:51 -- nvmf/common.sh@46 -- # : 0 00:15:00.286 01:22:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:00.286 01:22:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:00.286 01:22:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:00.286 01:22:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.286 01:22:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.286 01:22:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:00.286 01:22:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:00.286 01:22:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:00.286 01:22:51 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:00.286 01:22:51 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:00.286 01:22:51 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.286 01:22:51 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:00.286 01:22:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:00.286 01:22:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.286 01:22:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:00.286 01:22:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:00.286 01:22:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:00.286 01:22:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.286 01:22:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.286 01:22:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.286 01:22:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:00.286 01:22:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:00.286 01:22:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:00.286 01:22:51 -- common/autotest_common.sh@10 -- # set +x 00:15:02.190 01:22:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:02.190 01:22:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:02.190 01:22:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:02.190 01:22:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:02.190 01:22:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:02.190 01:22:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:02.190 01:22:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:02.190 01:22:53 -- nvmf/common.sh@294 -- # net_devs=() 
00:15:02.190 01:22:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:02.190 01:22:53 -- nvmf/common.sh@295 -- # e810=() 00:15:02.190 01:22:53 -- nvmf/common.sh@295 -- # local -ga e810 00:15:02.190 01:22:53 -- nvmf/common.sh@296 -- # x722=() 00:15:02.190 01:22:53 -- nvmf/common.sh@296 -- # local -ga x722 00:15:02.190 01:22:53 -- nvmf/common.sh@297 -- # mlx=() 00:15:02.190 01:22:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:02.190 01:22:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.190 01:22:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.190 01:22:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.190 01:22:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.190 01:22:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.190 01:22:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.190 01:22:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.190 01:22:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.190 01:22:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.190 01:22:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.190 01:22:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.190 01:22:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:02.190 01:22:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:02.190 01:22:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:02.190 01:22:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:02.190 01:22:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:02.190 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:02.190 01:22:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:02.190 01:22:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:02.190 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:02.190 01:22:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:02.190 01:22:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:02.190 01:22:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.190 01:22:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:02.190 01:22:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:15:02.190 01:22:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:02.190 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:02.190 01:22:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.190 01:22:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:02.190 01:22:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.190 01:22:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:02.190 01:22:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.190 01:22:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:02.190 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:02.190 01:22:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.190 01:22:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:02.190 01:22:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:02.190 01:22:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:02.190 01:22:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:02.190 01:22:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.191 01:22:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.191 01:22:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.191 01:22:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:02.191 01:22:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.191 01:22:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.191 01:22:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:02.191 01:22:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.191 01:22:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.191 01:22:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:02.191 01:22:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:02.191 01:22:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.191 01:22:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.191 01:22:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.191 01:22:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.191 01:22:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:02.191 01:22:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.191 01:22:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.191 01:22:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.191 01:22:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:02.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:15:02.191 00:15:02.191 --- 10.0.0.2 ping statistics --- 00:15:02.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.191 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:15:02.191 01:22:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:02.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:15:02.191 00:15:02.191 --- 10.0.0.1 ping statistics --- 00:15:02.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.191 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:15:02.191 01:22:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.191 01:22:53 -- nvmf/common.sh@410 -- # return 0 00:15:02.191 01:22:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:02.191 01:22:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.191 01:22:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:02.191 01:22:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:02.191 01:22:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.191 01:22:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:02.191 01:22:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:02.191 01:22:53 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:02.191 01:22:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:02.191 01:22:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:02.191 01:22:53 -- common/autotest_common.sh@10 -- # set +x 00:15:02.191 01:22:53 -- nvmf/common.sh@469 -- # nvmfpid=624662 00:15:02.191 01:22:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:02.191 01:22:53 -- nvmf/common.sh@470 -- # waitforlisten 624662 00:15:02.191 01:22:53 -- common/autotest_common.sh@819 -- # '[' -z 624662 ']' 00:15:02.191 01:22:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.191 01:22:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:02.191 01:22:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.191 01:22:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:02.191 01:22:53 -- common/autotest_common.sh@10 -- # set +x 00:15:02.191 [2024-07-27 01:22:53.891639] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:02.191 [2024-07-27 01:22:53.891710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.191 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.450 [2024-07-27 01:22:53.958685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.450 [2024-07-27 01:22:54.077120] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:02.450 [2024-07-27 01:22:54.077300] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.450 [2024-07-27 01:22:54.077319] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.450 [2024-07-27 01:22:54.077334] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
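The queue_depth setup that follows drives the freshly started nvmf_tgt through rpc_cmd over its default /var/tmp/spdk.sock socket. Replayed by hand with scripts/rpc.py, the target-side sequence traced below would look roughly like this sketch (the $RPC variable is introduced here for readability; the exact rpc_cmd wrapper behaviour is assumed):
# hypothetical manual replay of the rpc_cmd calls traced below
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevperf is then pointed at the 10.0.0.2:4420 listener through its own RPC socket (/var/tmp/bdevperf.sock)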
00:15:02.450 [2024-07-27 01:22:54.077369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.386 01:22:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:03.386 01:22:54 -- common/autotest_common.sh@852 -- # return 0 00:15:03.386 01:22:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:03.386 01:22:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:03.386 01:22:54 -- common/autotest_common.sh@10 -- # set +x 00:15:03.386 01:22:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.386 01:22:54 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:03.386 01:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.386 01:22:54 -- common/autotest_common.sh@10 -- # set +x 00:15:03.386 [2024-07-27 01:22:54.925788] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.386 01:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.386 01:22:54 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:03.386 01:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.386 01:22:54 -- common/autotest_common.sh@10 -- # set +x 00:15:03.386 Malloc0 00:15:03.386 01:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.386 01:22:54 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:03.386 01:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.386 01:22:54 -- common/autotest_common.sh@10 -- # set +x 00:15:03.386 01:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.386 01:22:54 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:03.386 01:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.386 01:22:54 -- common/autotest_common.sh@10 -- # set +x 00:15:03.386 01:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.386 01:22:54 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:03.386 01:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.386 01:22:54 -- common/autotest_common.sh@10 -- # set +x 00:15:03.386 [2024-07-27 01:22:54.983884] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.386 01:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.386 01:22:54 -- target/queue_depth.sh@30 -- # bdevperf_pid=624819 00:15:03.386 01:22:54 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:03.386 01:22:54 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:03.386 01:22:54 -- target/queue_depth.sh@33 -- # waitforlisten 624819 /var/tmp/bdevperf.sock 00:15:03.386 01:22:54 -- common/autotest_common.sh@819 -- # '[' -z 624819 ']' 00:15:03.386 01:22:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:03.386 01:22:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:03.386 01:22:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:03.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:03.386 01:22:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:03.386 01:22:54 -- common/autotest_common.sh@10 -- # set +x 00:15:03.386 [2024-07-27 01:22:55.025633] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:03.386 [2024-07-27 01:22:55.025696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624819 ] 00:15:03.386 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.386 [2024-07-27 01:22:55.087683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.646 [2024-07-27 01:22:55.202258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.584 01:22:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:04.584 01:22:56 -- common/autotest_common.sh@852 -- # return 0 00:15:04.584 01:22:56 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:04.584 01:22:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.584 01:22:56 -- common/autotest_common.sh@10 -- # set +x 00:15:04.584 NVMe0n1 00:15:04.584 01:22:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.584 01:22:56 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:04.584 Running I/O for 10 seconds... 00:15:14.617 00:15:14.617 Latency(us) 00:15:14.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.617 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:14.617 Verification LBA range: start 0x0 length 0x4000 00:15:14.617 NVMe0n1 : 10.07 12312.39 48.10 0.00 0.00 82835.05 15049.01 62137.84 00:15:14.617 =================================================================================================================== 00:15:14.617 Total : 12312.39 48.10 0.00 0.00 82835.05 15049.01 62137.84 00:15:14.617 0 00:15:14.617 01:23:06 -- target/queue_depth.sh@39 -- # killprocess 624819 00:15:14.617 01:23:06 -- common/autotest_common.sh@926 -- # '[' -z 624819 ']' 00:15:14.617 01:23:06 -- common/autotest_common.sh@930 -- # kill -0 624819 00:15:14.617 01:23:06 -- common/autotest_common.sh@931 -- # uname 00:15:14.617 01:23:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:14.617 01:23:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 624819 00:15:14.617 01:23:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:14.617 01:23:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:14.617 01:23:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 624819' 00:15:14.617 killing process with pid 624819 00:15:14.617 01:23:06 -- common/autotest_common.sh@945 -- # kill 624819 00:15:14.617 Received shutdown signal, test time was about 10.000000 seconds 00:15:14.617 00:15:14.617 Latency(us) 00:15:14.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.617 =================================================================================================================== 00:15:14.617 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:14.617 01:23:06 -- 
common/autotest_common.sh@950 -- # wait 624819 00:15:14.876 01:23:06 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:14.876 01:23:06 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:14.876 01:23:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:14.876 01:23:06 -- nvmf/common.sh@116 -- # sync 00:15:14.876 01:23:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:14.876 01:23:06 -- nvmf/common.sh@119 -- # set +e 00:15:14.876 01:23:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:14.876 01:23:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:14.876 rmmod nvme_tcp 00:15:14.876 rmmod nvme_fabrics 00:15:14.876 rmmod nvme_keyring 00:15:15.134 01:23:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:15.134 01:23:06 -- nvmf/common.sh@123 -- # set -e 00:15:15.134 01:23:06 -- nvmf/common.sh@124 -- # return 0 00:15:15.134 01:23:06 -- nvmf/common.sh@477 -- # '[' -n 624662 ']' 00:15:15.134 01:23:06 -- nvmf/common.sh@478 -- # killprocess 624662 00:15:15.134 01:23:06 -- common/autotest_common.sh@926 -- # '[' -z 624662 ']' 00:15:15.134 01:23:06 -- common/autotest_common.sh@930 -- # kill -0 624662 00:15:15.134 01:23:06 -- common/autotest_common.sh@931 -- # uname 00:15:15.134 01:23:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:15.134 01:23:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 624662 00:15:15.134 01:23:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:15.134 01:23:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:15.134 01:23:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 624662' 00:15:15.134 killing process with pid 624662 00:15:15.134 01:23:06 -- common/autotest_common.sh@945 -- # kill 624662 00:15:15.134 01:23:06 -- common/autotest_common.sh@950 -- # wait 624662 00:15:15.392 01:23:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:15.392 01:23:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:15.392 01:23:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:15.392 01:23:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.392 01:23:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:15.392 01:23:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.392 01:23:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.392 01:23:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.298 01:23:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:17.298 00:15:17.298 real 0m17.288s 00:15:17.298 user 0m24.895s 00:15:17.298 sys 0m3.130s 00:15:17.298 01:23:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.298 01:23:09 -- common/autotest_common.sh@10 -- # set +x 00:15:17.298 ************************************ 00:15:17.298 END TEST nvmf_queue_depth 00:15:17.298 ************************************ 00:15:17.298 01:23:09 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:17.298 01:23:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:17.298 01:23:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:17.298 01:23:09 -- common/autotest_common.sh@10 -- # set +x 00:15:17.298 ************************************ 00:15:17.298 START TEST nvmf_multipath 00:15:17.298 ************************************ 00:15:17.298 01:23:09 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:17.556 * Looking for test storage... 00:15:17.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.556 01:23:09 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.556 01:23:09 -- nvmf/common.sh@7 -- # uname -s 00:15:17.556 01:23:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.556 01:23:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.556 01:23:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.556 01:23:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.556 01:23:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.556 01:23:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.556 01:23:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.557 01:23:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.557 01:23:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.557 01:23:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.557 01:23:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.557 01:23:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.557 01:23:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.557 01:23:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.557 01:23:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.557 01:23:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.557 01:23:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.557 01:23:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.557 01:23:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.557 01:23:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.557 01:23:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.557 01:23:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.557 01:23:09 -- paths/export.sh@5 -- # export PATH 00:15:17.557 01:23:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.557 01:23:09 -- nvmf/common.sh@46 -- # : 0 00:15:17.557 01:23:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:17.557 01:23:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:17.557 01:23:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:17.557 01:23:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.557 01:23:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.557 01:23:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:17.557 01:23:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:17.557 01:23:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:17.557 01:23:09 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:17.557 01:23:09 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:17.557 01:23:09 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:17.557 01:23:09 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.557 01:23:09 -- target/multipath.sh@43 -- # nvmftestinit 00:15:17.557 01:23:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:17.557 01:23:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.557 01:23:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:17.557 01:23:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:17.557 01:23:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:17.557 01:23:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.557 01:23:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.557 01:23:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.557 01:23:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:17.557 01:23:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:17.557 01:23:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:17.557 01:23:09 -- common/autotest_common.sh@10 -- # set +x 00:15:19.462 01:23:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:19.462 01:23:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:19.462 01:23:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:19.462 01:23:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:19.462 01:23:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:19.462 01:23:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:19.462 01:23:11 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:15:19.462 01:23:11 -- nvmf/common.sh@294 -- # net_devs=() 00:15:19.463 01:23:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:19.463 01:23:11 -- nvmf/common.sh@295 -- # e810=() 00:15:19.463 01:23:11 -- nvmf/common.sh@295 -- # local -ga e810 00:15:19.463 01:23:11 -- nvmf/common.sh@296 -- # x722=() 00:15:19.463 01:23:11 -- nvmf/common.sh@296 -- # local -ga x722 00:15:19.463 01:23:11 -- nvmf/common.sh@297 -- # mlx=() 00:15:19.463 01:23:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:19.463 01:23:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.463 01:23:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.463 01:23:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.463 01:23:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.463 01:23:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.463 01:23:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.463 01:23:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.463 01:23:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.463 01:23:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.463 01:23:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.463 01:23:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.463 01:23:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:19.463 01:23:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:19.463 01:23:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:19.463 01:23:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:19.463 01:23:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:19.463 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:19.463 01:23:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:19.463 01:23:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:19.463 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:19.463 01:23:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:19.463 01:23:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:19.463 01:23:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.463 01:23:11 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:15:19.463 01:23:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.463 01:23:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:19.463 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:19.463 01:23:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.463 01:23:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:19.463 01:23:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.463 01:23:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:19.463 01:23:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.463 01:23:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:19.463 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:19.463 01:23:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.463 01:23:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:19.463 01:23:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:19.463 01:23:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:19.463 01:23:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.463 01:23:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.463 01:23:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.463 01:23:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:19.463 01:23:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.463 01:23:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.463 01:23:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:19.463 01:23:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.463 01:23:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.463 01:23:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:19.463 01:23:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:19.463 01:23:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.463 01:23:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.463 01:23:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.463 01:23:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.463 01:23:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:19.463 01:23:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:19.463 01:23:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.463 01:23:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.463 01:23:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:19.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:15:19.463 00:15:19.463 --- 10.0.0.2 ping statistics --- 00:15:19.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.463 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:15:19.463 01:23:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:19.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:15:19.463 00:15:19.463 --- 10.0.0.1 ping statistics --- 00:15:19.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.463 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:15:19.463 01:23:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.463 01:23:11 -- nvmf/common.sh@410 -- # return 0 00:15:19.463 01:23:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:19.463 01:23:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.463 01:23:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:19.463 01:23:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.463 01:23:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:19.463 01:23:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:19.722 01:23:11 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:19.722 01:23:11 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:19.722 only one NIC for nvmf test 00:15:19.722 01:23:11 -- target/multipath.sh@47 -- # nvmftestfini 00:15:19.722 01:23:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:19.722 01:23:11 -- nvmf/common.sh@116 -- # sync 00:15:19.722 01:23:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:19.722 01:23:11 -- nvmf/common.sh@119 -- # set +e 00:15:19.722 01:23:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:19.722 01:23:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:19.722 rmmod nvme_tcp 00:15:19.722 rmmod nvme_fabrics 00:15:19.722 rmmod nvme_keyring 00:15:19.722 01:23:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:19.722 01:23:11 -- nvmf/common.sh@123 -- # set -e 00:15:19.722 01:23:11 -- nvmf/common.sh@124 -- # return 0 00:15:19.722 01:23:11 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:15:19.722 01:23:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:19.722 01:23:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:19.722 01:23:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:19.722 01:23:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.722 01:23:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:19.722 01:23:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.722 01:23:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.722 01:23:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.636 01:23:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:21.636 01:23:13 -- target/multipath.sh@48 -- # exit 0 00:15:21.636 01:23:13 -- target/multipath.sh@1 -- # nvmftestfini 00:15:21.636 01:23:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:21.636 01:23:13 -- nvmf/common.sh@116 -- # sync 00:15:21.636 01:23:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:21.636 01:23:13 -- nvmf/common.sh@119 -- # set +e 00:15:21.636 01:23:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:21.637 01:23:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:21.637 01:23:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:21.637 01:23:13 -- nvmf/common.sh@123 -- # set -e 00:15:21.637 01:23:13 -- nvmf/common.sh@124 -- # return 0 00:15:21.637 01:23:13 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:15:21.637 01:23:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:21.637 01:23:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:21.637 01:23:13 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:15:21.637 01:23:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:21.637 01:23:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:21.637 01:23:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.637 01:23:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.637 01:23:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.637 01:23:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:21.637 00:15:21.637 real 0m4.299s 00:15:21.637 user 0m0.805s 00:15:21.637 sys 0m1.483s 00:15:21.637 01:23:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.637 01:23:13 -- common/autotest_common.sh@10 -- # set +x 00:15:21.637 ************************************ 00:15:21.637 END TEST nvmf_multipath 00:15:21.637 ************************************ 00:15:21.637 01:23:13 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:21.637 01:23:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:21.637 01:23:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:21.637 01:23:13 -- common/autotest_common.sh@10 -- # set +x 00:15:21.637 ************************************ 00:15:21.637 START TEST nvmf_zcopy 00:15:21.637 ************************************ 00:15:21.637 01:23:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:21.903 * Looking for test storage... 00:15:21.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.903 01:23:13 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.903 01:23:13 -- nvmf/common.sh@7 -- # uname -s 00:15:21.903 01:23:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.903 01:23:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.903 01:23:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.903 01:23:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.903 01:23:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.903 01:23:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.903 01:23:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.903 01:23:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.903 01:23:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.903 01:23:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.903 01:23:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.903 01:23:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.903 01:23:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.903 01:23:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.903 01:23:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.903 01:23:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.903 01:23:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.903 01:23:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.903 01:23:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.903 01:23:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.903 01:23:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.903 01:23:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.903 01:23:13 -- paths/export.sh@5 -- # export PATH 00:15:21.903 01:23:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.903 01:23:13 -- nvmf/common.sh@46 -- # : 0 00:15:21.903 01:23:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:21.903 01:23:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:21.903 01:23:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:21.903 01:23:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.903 01:23:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.903 01:23:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:21.903 01:23:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:21.903 01:23:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:21.903 01:23:13 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:21.903 01:23:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:21.903 01:23:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.903 01:23:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:21.903 01:23:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:21.903 01:23:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:21.903 01:23:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.903 01:23:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.903 01:23:13 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.903 01:23:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:21.903 01:23:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:21.903 01:23:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:21.903 01:23:13 -- common/autotest_common.sh@10 -- # set +x 00:15:23.806 01:23:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:23.806 01:23:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:23.806 01:23:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:23.806 01:23:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:23.806 01:23:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:23.806 01:23:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:23.806 01:23:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:23.806 01:23:15 -- nvmf/common.sh@294 -- # net_devs=() 00:15:23.806 01:23:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:23.806 01:23:15 -- nvmf/common.sh@295 -- # e810=() 00:15:23.806 01:23:15 -- nvmf/common.sh@295 -- # local -ga e810 00:15:23.806 01:23:15 -- nvmf/common.sh@296 -- # x722=() 00:15:23.806 01:23:15 -- nvmf/common.sh@296 -- # local -ga x722 00:15:23.806 01:23:15 -- nvmf/common.sh@297 -- # mlx=() 00:15:23.806 01:23:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:23.806 01:23:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:23.806 01:23:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:23.806 01:23:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:23.806 01:23:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:23.806 01:23:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:23.806 01:23:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:23.806 01:23:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:23.806 01:23:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:23.806 01:23:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:23.806 01:23:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:23.806 01:23:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:23.806 01:23:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:23.806 01:23:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:23.806 01:23:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:23.806 01:23:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:23.806 01:23:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:23.806 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:23.806 01:23:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:23.806 01:23:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:23.806 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:23.806 
01:23:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:23.806 01:23:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:23.806 01:23:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.806 01:23:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:23.806 01:23:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.806 01:23:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:23.806 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:23.806 01:23:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.806 01:23:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:23.806 01:23:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.806 01:23:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:23.806 01:23:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.806 01:23:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:23.806 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:23.806 01:23:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.806 01:23:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:23.806 01:23:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:23.806 01:23:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:23.806 01:23:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:23.806 01:23:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.806 01:23:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:23.806 01:23:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:23.806 01:23:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:23.806 01:23:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:23.806 01:23:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:23.806 01:23:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:23.806 01:23:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:23.806 01:23:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.806 01:23:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:23.806 01:23:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:23.806 01:23:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:23.806 01:23:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:23.806 01:23:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:23.806 01:23:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:23.806 01:23:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:23.806 01:23:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:23.806 01:23:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:23.806 01:23:15 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:23.806 01:23:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:23.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:23.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:15:23.806 00:15:23.806 --- 10.0.0.2 ping statistics --- 00:15:23.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.806 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:15:23.806 01:23:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:23.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:23.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:15:23.806 00:15:23.806 --- 10.0.0.1 ping statistics --- 00:15:23.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.806 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:15:23.806 01:23:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:23.807 01:23:15 -- nvmf/common.sh@410 -- # return 0 00:15:24.065 01:23:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:24.065 01:23:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.065 01:23:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:24.065 01:23:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:24.065 01:23:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.065 01:23:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:24.065 01:23:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:24.065 01:23:15 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:24.065 01:23:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:24.065 01:23:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:24.065 01:23:15 -- common/autotest_common.sh@10 -- # set +x 00:15:24.065 01:23:15 -- nvmf/common.sh@469 -- # nvmfpid=630059 00:15:24.065 01:23:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:24.065 01:23:15 -- nvmf/common.sh@470 -- # waitforlisten 630059 00:15:24.065 01:23:15 -- common/autotest_common.sh@819 -- # '[' -z 630059 ']' 00:15:24.065 01:23:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.065 01:23:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:24.065 01:23:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.065 01:23:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:24.065 01:23:15 -- common/autotest_common.sh@10 -- # set +x 00:15:24.065 [2024-07-27 01:23:15.634573] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
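For readability, the nvmf_tcp_init and nvmfappstart steps traced above condense to the following shell sequence (a sketch assembled from the commands visible in this log; the cvl_0_0/cvl_0_1 interface names and the workspace path are specific to this runner):

  # target runs inside its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-facing port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator keeps 10.0.0.1 in the host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through
  ping -c 1 10.0.0.2                                     # host -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> host reachability
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &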
00:15:24.065 [2024-07-27 01:23:15.634647] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.065 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.066 [2024-07-27 01:23:15.705346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.066 [2024-07-27 01:23:15.821512] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:24.066 [2024-07-27 01:23:15.821675] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.066 [2024-07-27 01:23:15.821694] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.066 [2024-07-27 01:23:15.821708] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.066 [2024-07-27 01:23:15.821736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.001 01:23:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:25.001 01:23:16 -- common/autotest_common.sh@852 -- # return 0 00:15:25.001 01:23:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:25.001 01:23:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:25.002 01:23:16 -- common/autotest_common.sh@10 -- # set +x 00:15:25.002 01:23:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.002 01:23:16 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:25.002 01:23:16 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:25.002 01:23:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.002 01:23:16 -- common/autotest_common.sh@10 -- # set +x 00:15:25.002 [2024-07-27 01:23:16.645297] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.002 01:23:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.002 01:23:16 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:25.002 01:23:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.002 01:23:16 -- common/autotest_common.sh@10 -- # set +x 00:15:25.002 01:23:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.002 01:23:16 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.002 01:23:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.002 01:23:16 -- common/autotest_common.sh@10 -- # set +x 00:15:25.002 [2024-07-27 01:23:16.661477] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.002 01:23:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.002 01:23:16 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:25.002 01:23:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.002 01:23:16 -- common/autotest_common.sh@10 -- # set +x 00:15:25.002 01:23:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.002 01:23:16 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:25.002 01:23:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.002 01:23:16 -- common/autotest_common.sh@10 -- # set +x 00:15:25.002 malloc0 00:15:25.002 01:23:16 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:15:25.002 01:23:16 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:25.002 01:23:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.002 01:23:16 -- common/autotest_common.sh@10 -- # set +x 00:15:25.002 01:23:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.002 01:23:16 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:25.002 01:23:16 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:25.002 01:23:16 -- nvmf/common.sh@520 -- # config=() 00:15:25.002 01:23:16 -- nvmf/common.sh@520 -- # local subsystem config 00:15:25.002 01:23:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:25.002 01:23:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:25.002 { 00:15:25.002 "params": { 00:15:25.002 "name": "Nvme$subsystem", 00:15:25.002 "trtype": "$TEST_TRANSPORT", 00:15:25.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.002 "adrfam": "ipv4", 00:15:25.002 "trsvcid": "$NVMF_PORT", 00:15:25.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.002 "hdgst": ${hdgst:-false}, 00:15:25.002 "ddgst": ${ddgst:-false} 00:15:25.002 }, 00:15:25.002 "method": "bdev_nvme_attach_controller" 00:15:25.002 } 00:15:25.002 EOF 00:15:25.002 )") 00:15:25.002 01:23:16 -- nvmf/common.sh@542 -- # cat 00:15:25.002 01:23:16 -- nvmf/common.sh@544 -- # jq . 00:15:25.002 01:23:16 -- nvmf/common.sh@545 -- # IFS=, 00:15:25.002 01:23:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:25.002 "params": { 00:15:25.002 "name": "Nvme1", 00:15:25.002 "trtype": "tcp", 00:15:25.002 "traddr": "10.0.0.2", 00:15:25.002 "adrfam": "ipv4", 00:15:25.002 "trsvcid": "4420", 00:15:25.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.002 "hdgst": false, 00:15:25.002 "ddgst": false 00:15:25.002 }, 00:15:25.002 "method": "bdev_nvme_attach_controller" 00:15:25.002 }' 00:15:25.002 [2024-07-27 01:23:16.740193] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:25.002 [2024-07-27 01:23:16.740273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630217 ] 00:15:25.261 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.261 [2024-07-27 01:23:16.809272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.261 [2024-07-27 01:23:16.928983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.520 Running I/O for 10 seconds... 
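The target configuration and the first bdevperf pass traced above reduce to the sketch below; rpc_cmd is the autotest helper that talks to the target over /var/tmp/spdk.sock, and the JSON that bdevperf reads from /dev/fd/62 is the gen_nvmf_target_json output echoed in the log (an attach of Nvme1 to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420):

  # zcopy target setup as issued by target/zcopy.sh
  rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy              # TCP transport, zero-copy enabled
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_malloc_create 32 4096 -b malloc0                     # 32 MB malloc bdev, 4096-byte blocks
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # 10-second verify workload; the /dev/fd/62 seen in the log comes from process substitution
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192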
00:15:37.728 00:15:37.728 Latency(us) 00:15:37.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.728 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:37.728 Verification LBA range: start 0x0 length 0x1000 00:15:37.728 Nvme1n1 : 10.01 8669.93 67.73 0.00 0.00 14728.42 1456.36 24078.41 00:15:37.728 =================================================================================================================== 00:15:37.728 Total : 8669.93 67.73 0.00 0.00 14728.42 1456.36 24078.41 00:15:37.728 01:23:27 -- target/zcopy.sh@39 -- # perfpid=631572 00:15:37.728 01:23:27 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:37.728 01:23:27 -- common/autotest_common.sh@10 -- # set +x 00:15:37.728 01:23:27 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:37.728 01:23:27 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:37.728 01:23:27 -- nvmf/common.sh@520 -- # config=() 00:15:37.728 01:23:27 -- nvmf/common.sh@520 -- # local subsystem config 00:15:37.728 01:23:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:37.728 01:23:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:37.728 { 00:15:37.728 "params": { 00:15:37.728 "name": "Nvme$subsystem", 00:15:37.728 "trtype": "$TEST_TRANSPORT", 00:15:37.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:37.728 "adrfam": "ipv4", 00:15:37.728 "trsvcid": "$NVMF_PORT", 00:15:37.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:37.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:37.728 "hdgst": ${hdgst:-false}, 00:15:37.728 "ddgst": ${ddgst:-false} 00:15:37.728 }, 00:15:37.728 "method": "bdev_nvme_attach_controller" 00:15:37.728 } 00:15:37.728 EOF 00:15:37.728 )") 00:15:37.728 [2024-07-27 01:23:27.553653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.728 [2024-07-27 01:23:27.553703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 01:23:27 -- nvmf/common.sh@542 -- # cat 00:15:37.729 01:23:27 -- nvmf/common.sh@544 -- # jq . 
00:15:37.729 01:23:27 -- nvmf/common.sh@545 -- # IFS=, 00:15:37.729 01:23:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:37.729 "params": { 00:15:37.729 "name": "Nvme1", 00:15:37.729 "trtype": "tcp", 00:15:37.729 "traddr": "10.0.0.2", 00:15:37.729 "adrfam": "ipv4", 00:15:37.729 "trsvcid": "4420", 00:15:37.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.729 "hdgst": false, 00:15:37.729 "ddgst": false 00:15:37.729 }, 00:15:37.729 "method": "bdev_nvme_attach_controller" 00:15:37.729 }' 00:15:37.729 [2024-07-27 01:23:27.561596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.561622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.569615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.569641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.577628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.577649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.585647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.585667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.588347] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:37.729 [2024-07-27 01:23:27.588419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631572 ] 00:15:37.729 [2024-07-27 01:23:27.593666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.593686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.601691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.601717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.609711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.609731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.729 [2024-07-27 01:23:27.617733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.617752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.625773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.625796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.633795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.633819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.641819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 
1 already in use 00:15:37.729 [2024-07-27 01:23:27.641843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.649841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.649865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.651845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.729 [2024-07-27 01:23:27.657884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.657916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.665922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.665962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.673909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.673934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.681929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.681954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.689951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.689975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.697974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.697997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.705996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.706021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.714017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.714041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.722083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.722133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.730077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.730116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.738108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.738129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.746125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.746155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.754145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 
01:23:27.754165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.762157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.762177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.769964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.729 [2024-07-27 01:23:27.770178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.770200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.778196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.778216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.786244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.786275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.794274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.794312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.802303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.802357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.810360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.810397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.818376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.818434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.826392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.826446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.834376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.834401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.842436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.842474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.850464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.850508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.858471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.858510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.866469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.866494] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.874491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.874519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.882526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.882554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.890545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.890572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.898568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.898595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.906588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.906615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.914610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.914634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.729 [2024-07-27 01:23:27.922631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.729 [2024-07-27 01:23:27.922655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:27.930657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:27.930681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:27.938681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:27.938705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:27.946711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:27.946738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:27.954733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:27.954760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:27.962757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:27.962783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:27.970774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:27.970798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:27.978810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:27.978838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:27.986828] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:27.986853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 Running I/O for 5 seconds... 00:15:37.730 [2024-07-27 01:23:27.995103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:27.995145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.017197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.017230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.027000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.027031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.038893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.038923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.049762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.049792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.060775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.060806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.071655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.071685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.083365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.083394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.094501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.094531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.105265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.105295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.116586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.116615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.128192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.128222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.141428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.141459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.150085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 
[2024-07-27 01:23:28.150112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.162765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.162792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.173812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.173839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.182672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.182703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.193855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.193881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.204999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.205029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.215815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.215842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.227742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.227770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.236801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.236827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.247335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.247362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.257901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.257928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.270186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.270213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.279262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.279299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.290403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.290430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.300505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.300532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.310808] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.310835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.321208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.321236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.331514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.331541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.343630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.343657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.352253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.352280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.363556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.363584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.375919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.375948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.384704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.384732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.396413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.396442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.406515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.406543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.417166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.417193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.429024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.429069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.438211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.438238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.449345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.449373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.459773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.459800] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.470214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.470242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.730 [2024-07-27 01:23:28.480554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.730 [2024-07-27 01:23:28.480588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.490957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.490985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.503126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.503154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.512210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.512238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.522903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.522930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.532830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.532857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.542899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.542927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.553011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.553038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.563160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.563187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.573714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.573741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.586343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.586381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.595651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.595677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.606623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.606649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.617033] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.617068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.627398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.627426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.637387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.637414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.647460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.647488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.657317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.657344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.667365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.667392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.677800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.677837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.688281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.688308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.698642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.698669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.708656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.708682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.719037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.719073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.729794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.729821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.741960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.741988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.753496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.753523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.762404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.762431] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.773732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.773758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.784347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.784374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.794630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.794657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.807466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.807492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.816895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.816922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.827457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.827484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.839425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.839452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.848081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.848107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.859171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.859198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.869371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.869398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.879538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.879574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.890128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.890155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.902768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.902810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.912021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.912048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.923137] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.923164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.935596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.935623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.945140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.945167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.956096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.956123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.966214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.966241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.976321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.976348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.986389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.986415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:28.996485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:28.996527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:29.007049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:29.007087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:29.016328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:29.016354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:29.027578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:29.027622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.731 [2024-07-27 01:23:29.037721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.731 [2024-07-27 01:23:29.037748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.048278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.048305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.058231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.058258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.069121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.069148] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.079472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.079500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.089997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.090024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.100516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.100543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.110958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.110985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.121417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.121459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.131525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.131552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.141881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.141908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.152478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.152506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.162818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.162847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.173601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.173628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.183979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.184006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.194859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.194885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.204733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.204761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.215564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.215592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.225421] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.225448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.235570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.235597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.245662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.245690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.256176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.256203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.266893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.266920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.279104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.279131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.288190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.288217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.298947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.298974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.310946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.310973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.319808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.319835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.330402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.330429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.342573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.342600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.351856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.351883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.362526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.362552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.374444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.374486] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.383299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.383326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.395691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.395718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.404194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.404221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.416685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.416712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.426411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.426438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.437026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.437052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.447248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.447275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.457884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.457911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.469772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.469798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.732 [2024-07-27 01:23:29.478621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.732 [2024-07-27 01:23:29.478648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.489653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.489696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.499597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.499624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.510731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.510760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.521500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.521530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.532075] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.532103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.544448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.544489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.554168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.554196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.565078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.565105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.575913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.575940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.586042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.586109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.596393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.596431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.608855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.608882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.618587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.618623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.629411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.629453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.639515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.639541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.649964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.649991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.662482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.662509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.671656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.671683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.682660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.682687] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.694079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.694123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.703056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.703114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.714388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.714415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.726769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.726796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.735669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.735697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.992 [2024-07-27 01:23:29.746861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.992 [2024-07-27 01:23:29.746889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.757196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.757223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.767652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.767679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.777729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.777756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.789046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.789083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.799862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.799889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.810414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.810441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.820597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.820624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.830742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.830769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.841000] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.841027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.851349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.851376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.861531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.861558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.872078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.872113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.882284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.882311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.892710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.892736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.903012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.903039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.913208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.913235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.923727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.252 [2024-07-27 01:23:29.923753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.252 [2024-07-27 01:23:29.936293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.253 [2024-07-27 01:23:29.936320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.253 [2024-07-27 01:23:29.945355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.253 [2024-07-27 01:23:29.945382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.253 [2024-07-27 01:23:29.955882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.253 [2024-07-27 01:23:29.955909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.253 [2024-07-27 01:23:29.966069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.253 [2024-07-27 01:23:29.966095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.253 [2024-07-27 01:23:29.976357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.253 [2024-07-27 01:23:29.976384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.253 [2024-07-27 01:23:29.986643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.253 [2024-07-27 01:23:29.986670] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.253 [2024-07-27 01:23:29.996960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.253 [2024-07-27 01:23:29.996987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.253 [2024-07-27 01:23:30.007217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.253 [2024-07-27 01:23:30.007244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.017638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.017671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.028740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.028768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.039767] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.039795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.051237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.051266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.062696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.062723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.073691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.073729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.084520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.084548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.095350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.095378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.106496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.106524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.117726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.117754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.129139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.129168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.139516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.139543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.150298] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.150326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.162927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.162956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.172817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.172846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.184207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.184235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.195080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.195117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.205690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.205718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.216727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.216754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.227600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.227628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.238346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.238375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.249151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.513 [2024-07-27 01:23:30.249180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.513 [2024-07-27 01:23:30.259903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.514 [2024-07-27 01:23:30.259931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.773 [2024-07-27 01:23:30.270736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.773 [2024-07-27 01:23:30.270765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.773 [2024-07-27 01:23:30.281640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.773 [2024-07-27 01:23:30.281677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.773 [2024-07-27 01:23:30.292810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.773 [2024-07-27 01:23:30.292837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.773 [2024-07-27 01:23:30.303617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.773 [2024-07-27 01:23:30.303645] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.773 [2024-07-27 01:23:30.314301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.773 [2024-07-27 01:23:30.314329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.773 [2024-07-27 01:23:30.325186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.773 [2024-07-27 01:23:30.325214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.335612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.335641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.345943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.345973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.358843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.358871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.368333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.368361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.379709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.379737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.390643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.390670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.401412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.401440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.411779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.411808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.422043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.422081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.433299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.433327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.443945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.443976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.455082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.455110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.465660] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.465688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.476429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.476457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.487020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.487056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.497940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.497968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.509002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.509031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.520480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.520508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.774 [2024-07-27 01:23:30.530881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.774 [2024-07-27 01:23:30.530910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.542157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.542186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.552731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.552759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.563705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.563733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.574636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.574664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.585141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.585170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.597834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.597862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.607611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.607638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.618900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.618929] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.629173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.629202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.640142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.640171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.650834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.650865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.661862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.661891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.672757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.672786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.683658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.683687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.694532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.694560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.705221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.705249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.715928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.715956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.727178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.727206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.738621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.738649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.750138] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.750169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.760501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.760533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.771276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.771304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.032 [2024-07-27 01:23:30.781966] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.032 [2024-07-27 01:23:30.781994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.792862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.792891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.804091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.804120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.815481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.815509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.826703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.826731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.837452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.837480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.849190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.849218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.860134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.860161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.870775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.870803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.882043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.882082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.892702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.892730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.903591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.903620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.914559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.914587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.925239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.925267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.936030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.936066] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.947189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.947218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.958299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.958327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.968715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.968744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.979600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.979628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:30.990402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:30.990431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:31.001787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:31.001816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:31.012481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:31.012509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:31.023141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:31.023170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:31.033511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:31.033554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.291 [2024-07-27 01:23:31.044206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.291 [2024-07-27 01:23:31.044234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.054972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.055000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.065816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.065845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.076611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.076639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.089303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.089332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.099591] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.099619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.111146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.111174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.121739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.121767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.132291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.132319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.142875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.142903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.153624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.153652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.164308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.164336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.175296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.175325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.186656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.186686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.197457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.197485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.208289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.208318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.219263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.219291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.229896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.229924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.240701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.240729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.253586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.253614] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.262882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.262910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.274361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.274390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.285141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.285169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.295916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.295944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.556 [2024-07-27 01:23:31.306784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.556 [2024-07-27 01:23:31.306812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.317146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.317176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.327832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.327860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.338602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.338631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.349613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.349641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.361168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.361196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.371452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.371480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.382242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.382270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.393254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.393283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.404217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.404245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.415007] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.415036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.425682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.425710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.435631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.435659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.447035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.447084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.457884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.457912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.468682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.468710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.479711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.479739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.490325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.490353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.501429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.501456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.512211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.512247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.523236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.523264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.533693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.533720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.544755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.544783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.554794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.554823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.566409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.566436] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.834 [2024-07-27 01:23:31.577537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.834 [2024-07-27 01:23:31.577565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.590018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.590047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.600408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.600437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.612187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.612217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.622538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.622566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.633095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.633123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.644298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.644326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.655315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.655344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.666051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.666088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.677241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.677269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.687961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.687988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.698911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.698939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.709197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.709237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.720414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.720451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.731155] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.731183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.742499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.742527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.755202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.755231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.765078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.765107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.776409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.776439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.787674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.787702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.798884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.798929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.809820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.809849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.821286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.821315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.832247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.832276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.843113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.843142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.102 [2024-07-27 01:23:31.853880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.102 [2024-07-27 01:23:31.853908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.362 [2024-07-27 01:23:31.865251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.362 [2024-07-27 01:23:31.865280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.362 [2024-07-27 01:23:31.876427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.362 [2024-07-27 01:23:31.876456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.362 [2024-07-27 01:23:31.887438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.362 [2024-07-27 01:23:31.887482] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.362 [2024-07-27 01:23:31.898187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.362 [2024-07-27 01:23:31.898216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.362 [2024-07-27 01:23:31.909388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.362 [2024-07-27 01:23:31.909417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.362 [2024-07-27 01:23:31.920082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.362 [2024-07-27 01:23:31.920122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.362 [2024-07-27 01:23:31.931386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:31.931421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:31.942667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:31.942696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:31.953699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:31.953728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:31.964677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:31.964705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:31.975452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:31.975481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:31.986203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:31.986232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:31.997257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:31.997285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:32.007614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:32.007646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:32.018269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:32.018297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:32.029453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:32.029482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:32.040494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:32.040522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:32.051487] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:32.051533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:32.062869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:32.062897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:32.073477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:32.073505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:32.085862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:32.085890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:32.095897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:32.095925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:32.107446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:32.107474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.363 [2024-07-27 01:23:32.118213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.363 [2024-07-27 01:23:32.118242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.129022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.129052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.139575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.139611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.150414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.150442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.161399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.161427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.172629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.172658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.184008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.184037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.194615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.194644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.205880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.205911] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.217219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.217247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.227918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.227946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.238794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.238821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.249412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.249440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.260289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.260317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.270944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.270972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.281862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.281889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.292166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.292194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.302979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.303006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.313896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.313925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.334790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.334820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.345588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.345616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.356473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.356502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.367196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.367224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.622 [2024-07-27 01:23:32.378089] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.622 [2024-07-27 01:23:32.378117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.388677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.388707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.399694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.399722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.410821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.410849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.421226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.421254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.431535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.431564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.442355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.442384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.453700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.453730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.464669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.464698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.475829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.475857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.486476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.486504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.497229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.497258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.508133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.508161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.519521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.519549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.530801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.530830] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.542013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.542041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.553178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.553207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.563761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.563789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.574453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.574481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.586979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.587007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.598485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.598513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.607729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.607757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.619622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.619650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.880 [2024-07-27 01:23:32.630576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.880 [2024-07-27 01:23:32.630604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.139 [2024-07-27 01:23:32.641484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.139 [2024-07-27 01:23:32.641513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.139 [2024-07-27 01:23:32.652213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.139 [2024-07-27 01:23:32.652242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.139 [2024-07-27 01:23:32.663132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.139 [2024-07-27 01:23:32.663160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.139 [2024-07-27 01:23:32.673583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.139 [2024-07-27 01:23:32.673611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.139 [2024-07-27 01:23:32.683637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.139 [2024-07-27 01:23:32.683665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.139 [2024-07-27 01:23:32.694663] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.139 [2024-07-27 01:23:32.694707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.139 [2024-07-27 01:23:32.705502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.139 [2024-07-27 01:23:32.705530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.139 [2024-07-27 01:23:32.715910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.139 [2024-07-27 01:23:32.715939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.139 [2024-07-27 01:23:32.726733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.139 [2024-07-27 01:23:32.726761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.139 [2024-07-27 01:23:32.737383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.139 [2024-07-27 01:23:32.737427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.140 [2024-07-27 01:23:32.748422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.140 [2024-07-27 01:23:32.748450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.140 [2024-07-27 01:23:32.759409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.140 [2024-07-27 01:23:32.759437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.140 [2024-07-27 01:23:32.770357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.140 [2024-07-27 01:23:32.770385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.140 [2024-07-27 01:23:32.781143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.140 [2024-07-27 01:23:32.781171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.140 [2024-07-27 01:23:32.792257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.140 [2024-07-27 01:23:32.792286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.140 [2024-07-27 01:23:32.803321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.140 [2024-07-27 01:23:32.803349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.140 [2024-07-27 01:23:32.813920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.140 [2024-07-27 01:23:32.813948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.140 [2024-07-27 01:23:32.824212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.140 [2024-07-27 01:23:32.824243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.140 [2024-07-27 01:23:32.835112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.140 [2024-07-27 01:23:32.835141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.140 [2024-07-27 01:23:32.846071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.140 [2024-07-27 01:23:32.846098] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.140 [2024-07-27 01:23:32.856833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.140 [2024-07-27 01:23:32.856861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.140 [2024-07-27 01:23:32.867647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.140 [2024-07-27 01:23:32.867691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.140 [2024-07-27 01:23:32.878195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.140 [2024-07-27 01:23:32.878224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.140 [2024-07-27 01:23:32.889046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.140 [2024-07-27 01:23:32.889085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:32.900047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:32.900101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:32.910674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:32.910704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:32.921920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:32.921966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:32.932678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:32.932707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:32.943820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:32.943852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:32.954582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:32.954611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:32.965372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:32.965408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:32.976318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:32.976348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:32.987015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:32.987044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:32.998235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:32.998264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.008072] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.008111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 00:15:41.398 Latency(us) 00:15:41.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.398 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:41.398 Nvme1n1 : 5.01 11907.35 93.03 0.00 0.00 10735.83 4636.07 21748.24 00:15:41.398 =================================================================================================================== 00:15:41.398 Total : 11907.35 93.03 0.00 0.00 10735.83 4636.07 21748.24 00:15:41.398 [2024-07-27 01:23:33.012392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.012423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.020403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.020434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.028421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.028458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.036475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.036524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.044516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.044572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.052532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.052589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.060549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.060602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.068572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.068626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.076611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.076670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.084617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.084672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.092650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.092707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.100667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.100744] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.108692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.108749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.116714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.116772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.124730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.124786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.132743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.132795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.140766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.140820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.398 [2024-07-27 01:23:33.148749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.398 [2024-07-27 01:23:33.148783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.156765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.156796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.164784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.164814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.172804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.172835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.180814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.180841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.188901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.188955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.196926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.196983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.204914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.204957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.212916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.212947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.220933] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.220963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.228960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.228990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.236970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.236996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.245073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.245137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.253083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.253154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.261079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.261135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.269080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.269122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 [2024-07-27 01:23:33.277112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.659 [2024-07-27 01:23:33.277138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (631572) - No such process 00:15:41.659 01:23:33 -- target/zcopy.sh@49 -- # wait 631572 00:15:41.659 01:23:33 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.659 01:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.659 01:23:33 -- common/autotest_common.sh@10 -- # set +x 00:15:41.659 01:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.659 01:23:33 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:41.659 01:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.659 01:23:33 -- common/autotest_common.sh@10 -- # set +x 00:15:41.659 delay0 00:15:41.659 01:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.659 01:23:33 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:41.659 01:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.659 01:23:33 -- common/autotest_common.sh@10 -- # set +x 00:15:41.659 01:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.659 01:23:33 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:41.659 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.659 [2024-07-27 01:23:33.396565] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery 
service or discovery service referral 00:15:48.232 Initializing NVMe Controllers 00:15:48.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:48.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:48.232 Initialization complete. Launching workers. 00:15:48.232 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 91 00:15:48.232 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 375, failed to submit 36 00:15:48.232 success 191, unsuccess 184, failed 0 00:15:48.232 01:23:39 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:48.232 01:23:39 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:48.232 01:23:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:48.232 01:23:39 -- nvmf/common.sh@116 -- # sync 00:15:48.232 01:23:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:48.232 01:23:39 -- nvmf/common.sh@119 -- # set +e 00:15:48.232 01:23:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:48.232 01:23:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:48.232 rmmod nvme_tcp 00:15:48.232 rmmod nvme_fabrics 00:15:48.232 rmmod nvme_keyring 00:15:48.232 01:23:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:48.232 01:23:39 -- nvmf/common.sh@123 -- # set -e 00:15:48.232 01:23:39 -- nvmf/common.sh@124 -- # return 0 00:15:48.232 01:23:39 -- nvmf/common.sh@477 -- # '[' -n 630059 ']' 00:15:48.232 01:23:39 -- nvmf/common.sh@478 -- # killprocess 630059 00:15:48.232 01:23:39 -- common/autotest_common.sh@926 -- # '[' -z 630059 ']' 00:15:48.232 01:23:39 -- common/autotest_common.sh@930 -- # kill -0 630059 00:15:48.232 01:23:39 -- common/autotest_common.sh@931 -- # uname 00:15:48.232 01:23:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:48.232 01:23:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 630059 00:15:48.232 01:23:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:48.232 01:23:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:48.232 01:23:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 630059' 00:15:48.232 killing process with pid 630059 00:15:48.232 01:23:39 -- common/autotest_common.sh@945 -- # kill 630059 00:15:48.232 01:23:39 -- common/autotest_common.sh@950 -- # wait 630059 00:15:48.232 01:23:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:48.232 01:23:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:48.232 01:23:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:48.232 01:23:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.232 01:23:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:48.232 01:23:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.232 01:23:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.232 01:23:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.775 01:23:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:50.775 00:15:50.775 real 0m28.657s 00:15:50.775 user 0m42.277s 00:15:50.775 sys 0m8.279s 00:15:50.775 01:23:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.775 01:23:42 -- common/autotest_common.sh@10 -- # set +x 00:15:50.775 ************************************ 00:15:50.775 END TEST nvmf_zcopy 00:15:50.775 ************************************ 00:15:50.775 01:23:42 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:50.775 01:23:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:50.775 01:23:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:50.775 01:23:42 -- common/autotest_common.sh@10 -- # set +x 00:15:50.775 ************************************ 00:15:50.775 START TEST nvmf_nmic 00:15:50.775 ************************************ 00:15:50.775 01:23:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:50.775 * Looking for test storage... 00:15:50.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.775 01:23:42 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.775 01:23:42 -- nvmf/common.sh@7 -- # uname -s 00:15:50.775 01:23:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.775 01:23:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.775 01:23:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.775 01:23:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.775 01:23:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.775 01:23:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.775 01:23:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.775 01:23:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.775 01:23:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.775 01:23:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.775 01:23:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.775 01:23:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.775 01:23:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.775 01:23:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.775 01:23:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.775 01:23:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.775 01:23:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.775 01:23:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.775 01:23:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.775 01:23:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.775 01:23:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.775 01:23:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.775 01:23:42 -- paths/export.sh@5 -- # export PATH 00:15:50.775 01:23:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.775 01:23:42 -- nvmf/common.sh@46 -- # : 0 00:15:50.775 01:23:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:50.775 01:23:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:50.775 01:23:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:50.775 01:23:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.775 01:23:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.775 01:23:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:50.775 01:23:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:50.775 01:23:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:50.775 01:23:42 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.775 01:23:42 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.775 01:23:42 -- target/nmic.sh@14 -- # nvmftestinit 00:15:50.775 01:23:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:50.775 01:23:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.775 01:23:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:50.775 01:23:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:50.775 01:23:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:50.775 01:23:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.775 01:23:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.775 01:23:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.775 01:23:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:50.775 01:23:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:50.775 01:23:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:50.776 01:23:42 -- common/autotest_common.sh@10 -- # set +x 00:15:52.680 01:23:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
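Editor's note: the steps logged around here follow the usual SPDK target-test harness pattern: the test script sources test/nvmf/common.sh, calls nvmftestinit to prepare the NICs and test addresses, starts nvmf_tgt via nvmfappstart, and registers nvmftestfini as an exit trap. A condensed bash sketch of that pattern (not the literal contents of nmic.sh; the testdir/path derivation here is an assumption):

#!/usr/bin/env bash
# Sketch only: mirrors the helper calls visible in this log.
testdir=$(readlink -f "$(dirname "$0")")            # assumed layout: test/nvmf/target/
source "$testdir/../common.sh"                      # provides nvmftestinit, nvmfappstart, rpc_cmd, nvmftestfini

MALLOC_BDEV_SIZE=64
MALLOC_BLOCK_SIZE=512

nvmftestinit                                        # detect NICs, set up 10.0.0.1/10.0.0.2, load nvme-tcp
nvmfappstart -m 0xF                                 # start nvmf_tgt on cores 0-3
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

rpc_cmd nvmf_create_transport -t tcp -o -u 8192     # same RPC sequence as logged further down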
00:15:52.680 01:23:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:52.680 01:23:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:52.680 01:23:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:52.680 01:23:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:52.680 01:23:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:52.680 01:23:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:52.680 01:23:44 -- nvmf/common.sh@294 -- # net_devs=() 00:15:52.680 01:23:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:52.680 01:23:44 -- nvmf/common.sh@295 -- # e810=() 00:15:52.680 01:23:44 -- nvmf/common.sh@295 -- # local -ga e810 00:15:52.680 01:23:44 -- nvmf/common.sh@296 -- # x722=() 00:15:52.680 01:23:44 -- nvmf/common.sh@296 -- # local -ga x722 00:15:52.680 01:23:44 -- nvmf/common.sh@297 -- # mlx=() 00:15:52.680 01:23:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:52.680 01:23:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:52.680 01:23:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:52.680 01:23:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:52.680 01:23:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:52.680 01:23:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:52.680 01:23:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:52.680 01:23:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:52.680 01:23:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:52.680 01:23:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:52.680 01:23:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:52.680 01:23:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:52.680 01:23:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:52.680 01:23:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:52.680 01:23:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:52.680 01:23:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:52.680 01:23:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:52.680 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:52.680 01:23:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:52.680 01:23:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:52.680 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:52.680 01:23:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
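Editor's note: the block above is the harness classifying NICs purely by PCI vendor/device ID (0x8086:0x159b lands in the e810 bucket, which is why the ice driver is expected). A rough, self-contained way to do the same classification with lspci from pciutils; the real helper walks a cached PCI bus map rather than calling lspci, so treat this as an illustration:

# Sketch: bucket NIC ports by PCI vendor:device ID, as matched in the log above.
declare -a e810=() x722=() mlx=()
while read -r addr id; do
    case "$id" in
        8086:1592|8086:159b) e810+=("$addr") ;;     # Intel E810 family (matched here)
        8086:37d2)           x722+=("$addr") ;;     # Intel X722
        15b3:*)              mlx+=("$addr")  ;;     # Mellanox ConnectX family
    esac
done < <(lspci -Dn | awk '{print $1, $3}')          # "0000:0a:00.0 8086:159b", one port per line
echo "E810 ports: ${e810[*]:-none}"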
00:15:52.680 01:23:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:52.680 01:23:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.680 01:23:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:52.680 01:23:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.680 01:23:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:52.680 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:52.680 01:23:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.680 01:23:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:52.680 01:23:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.680 01:23:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:52.680 01:23:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.680 01:23:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:52.680 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:52.680 01:23:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.680 01:23:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:52.680 01:23:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:52.680 01:23:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:52.680 01:23:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:52.680 01:23:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.680 01:23:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.680 01:23:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:52.680 01:23:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:52.680 01:23:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:52.680 01:23:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:52.680 01:23:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:52.680 01:23:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:52.680 01:23:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.680 01:23:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:52.680 01:23:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:52.680 01:23:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:52.680 01:23:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:52.680 01:23:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:52.680 01:23:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:52.680 01:23:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:52.680 01:23:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:52.680 01:23:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:52.680 01:23:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:52.680 01:23:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:52.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
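Editor's note: above, nvmf_tcp_init splits the two ports of the NIC across network namespaces: the target-side port (cvl_0_0) moves into cvl_0_0_ns_spdk with 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1, and a ping verifies the path before any NVMe traffic. The same layout, condensed (interface names taken from this log; substitute your own):

# Sketch of the two-port namespace layout built by the harness above.
ip netns add cvl_0_0_ns_spdk                                    # target side gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator stays in the root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP replies back in
ping -c 1 10.0.0.2                                              # sanity check: initiator reaches the target IP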
00:15:52.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:15:52.680 00:15:52.680 --- 10.0.0.2 ping statistics --- 00:15:52.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.680 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:15:52.680 01:23:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:52.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:15:52.681 00:15:52.681 --- 10.0.0.1 ping statistics --- 00:15:52.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.681 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:15:52.681 01:23:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.681 01:23:44 -- nvmf/common.sh@410 -- # return 0 00:15:52.681 01:23:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:52.681 01:23:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.681 01:23:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:52.681 01:23:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:52.681 01:23:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.681 01:23:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:52.681 01:23:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:52.681 01:23:44 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:52.681 01:23:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:52.681 01:23:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:52.681 01:23:44 -- common/autotest_common.sh@10 -- # set +x 00:15:52.681 01:23:44 -- nvmf/common.sh@469 -- # nvmfpid=634891 00:15:52.681 01:23:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:52.681 01:23:44 -- nvmf/common.sh@470 -- # waitforlisten 634891 00:15:52.681 01:23:44 -- common/autotest_common.sh@819 -- # '[' -z 634891 ']' 00:15:52.681 01:23:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.681 01:23:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:52.681 01:23:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.681 01:23:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:52.681 01:23:44 -- common/autotest_common.sh@10 -- # set +x 00:15:52.681 [2024-07-27 01:23:44.278227] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:52.681 [2024-07-27 01:23:44.278309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.681 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.681 [2024-07-27 01:23:44.349020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.938 [2024-07-27 01:23:44.468926] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:52.938 [2024-07-27 01:23:44.469094] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.938 [2024-07-27 01:23:44.469115] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
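Editor's note: nvmfappstart above launches nvmf_tgt inside the target namespace (-m 0xF gives it four reactors, matching the "Total cores available: 4" line) and then blocks in waitforlisten until the RPC socket answers. A simplified sketch of that start-and-wait step; the real waitforlisten polls the RPC socket through rpc.py rather than just testing for the socket file, so this readiness check is an assumption:

# Sketch: start the target in the netns and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &                # path relative to the SPDK repo root (assumed)
nvmfpid=$!

rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    [ -S "$rpc_sock" ] && break                                 # crude readiness check (sketch only)
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done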
00:15:52.938 [2024-07-27 01:23:44.469130] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.938 [2024-07-27 01:23:44.469187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.938 [2024-07-27 01:23:44.469255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.938 [2024-07-27 01:23:44.469310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.938 [2024-07-27 01:23:44.469314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.507 01:23:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:53.507 01:23:45 -- common/autotest_common.sh@852 -- # return 0 00:15:53.507 01:23:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:53.507 01:23:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:53.507 01:23:45 -- common/autotest_common.sh@10 -- # set +x 00:15:53.507 01:23:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.507 01:23:45 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:53.507 01:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.507 01:23:45 -- common/autotest_common.sh@10 -- # set +x 00:15:53.507 [2024-07-27 01:23:45.245556] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.507 01:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.507 01:23:45 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:53.507 01:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.507 01:23:45 -- common/autotest_common.sh@10 -- # set +x 00:15:53.768 Malloc0 00:15:53.768 01:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.768 01:23:45 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:53.768 01:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.768 01:23:45 -- common/autotest_common.sh@10 -- # set +x 00:15:53.768 01:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.768 01:23:45 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:53.768 01:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.768 01:23:45 -- common/autotest_common.sh@10 -- # set +x 00:15:53.768 01:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.768 01:23:45 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.768 01:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.768 01:23:45 -- common/autotest_common.sh@10 -- # set +x 00:15:53.768 [2024-07-27 01:23:45.299186] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.768 01:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.768 01:23:45 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:53.768 test case1: single bdev can't be used in multiple subsystems 00:15:53.768 01:23:45 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:53.768 01:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.768 01:23:45 -- common/autotest_common.sh@10 -- # set +x 00:15:53.768 01:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.768 01:23:45 -- target/nmic.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:53.768 01:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.768 01:23:45 -- common/autotest_common.sh@10 -- # set +x 00:15:53.768 01:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.768 01:23:45 -- target/nmic.sh@28 -- # nmic_status=0 00:15:53.768 01:23:45 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:53.768 01:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.768 01:23:45 -- common/autotest_common.sh@10 -- # set +x 00:15:53.768 [2024-07-27 01:23:45.323004] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:53.768 [2024-07-27 01:23:45.323033] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:53.768 [2024-07-27 01:23:45.323071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.768 request: 00:15:53.768 { 00:15:53.768 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:53.768 "namespace": { 00:15:53.768 "bdev_name": "Malloc0" 00:15:53.768 }, 00:15:53.768 "method": "nvmf_subsystem_add_ns", 00:15:53.768 "req_id": 1 00:15:53.768 } 00:15:53.768 Got JSON-RPC error response 00:15:53.768 response: 00:15:53.768 { 00:15:53.768 "code": -32602, 00:15:53.768 "message": "Invalid parameters" 00:15:53.768 } 00:15:53.768 01:23:45 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:53.768 01:23:45 -- target/nmic.sh@29 -- # nmic_status=1 00:15:53.768 01:23:45 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:53.768 01:23:45 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:53.768 Adding namespace failed - expected result. 
00:15:53.768 01:23:45 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:53.768 test case2: host connect to nvmf target in multiple paths 00:15:53.768 01:23:45 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:53.768 01:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.768 01:23:45 -- common/autotest_common.sh@10 -- # set +x 00:15:53.768 [2024-07-27 01:23:45.331148] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:53.768 01:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.768 01:23:45 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:54.334 01:23:46 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:55.273 01:23:46 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:55.273 01:23:46 -- common/autotest_common.sh@1177 -- # local i=0 00:15:55.273 01:23:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:55.273 01:23:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:55.273 01:23:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:57.178 01:23:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:57.178 01:23:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:57.178 01:23:48 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:57.178 01:23:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:57.178 01:23:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:57.178 01:23:48 -- common/autotest_common.sh@1187 -- # return 0 00:15:57.178 01:23:48 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:57.178 [global] 00:15:57.178 thread=1 00:15:57.178 invalidate=1 00:15:57.178 rw=write 00:15:57.178 time_based=1 00:15:57.178 runtime=1 00:15:57.178 ioengine=libaio 00:15:57.178 direct=1 00:15:57.178 bs=4096 00:15:57.178 iodepth=1 00:15:57.178 norandommap=0 00:15:57.178 numjobs=1 00:15:57.178 00:15:57.178 verify_dump=1 00:15:57.178 verify_backlog=512 00:15:57.178 verify_state_save=0 00:15:57.178 do_verify=1 00:15:57.178 verify=crc32c-intel 00:15:57.178 [job0] 00:15:57.178 filename=/dev/nvme0n1 00:15:57.178 Could not set queue depth (nvme0n1) 00:15:57.178 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.178 fio-3.35 00:15:57.178 Starting 1 thread 00:15:58.557 00:15:58.557 job0: (groupid=0, jobs=1): err= 0: pid=635668: Sat Jul 27 01:23:50 2024 00:15:58.557 read: IOPS=408, BW=1634KiB/s (1673kB/s)(1676KiB/1026msec) 00:15:58.557 slat (nsec): min=5032, max=51747, avg=18833.68, stdev=10753.14 00:15:58.557 clat (usec): min=369, max=46004, avg=2064.24, stdev=7991.87 00:15:58.557 lat (usec): min=381, max=46024, avg=2083.08, stdev=7993.74 00:15:58.557 clat percentiles (usec): 00:15:58.557 | 1.00th=[ 392], 5.00th=[ 408], 10.00th=[ 420], 20.00th=[ 441], 00:15:58.557 | 30.00th=[ 449], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 482], 00:15:58.557 | 70.00th=[ 490], 
80.00th=[ 506], 90.00th=[ 570], 95.00th=[ 627], 00:15:58.557 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:15:58.557 | 99.99th=[45876] 00:15:58.557 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:15:58.557 slat (nsec): min=6095, max=49104, avg=18143.35, stdev=8193.80 00:15:58.557 clat (usec): min=213, max=480, avg=269.84, stdev=48.45 00:15:58.557 lat (usec): min=222, max=522, avg=287.98, stdev=52.91 00:15:58.557 clat percentiles (usec): 00:15:58.557 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 239], 00:15:58.557 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:15:58.557 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 334], 95.00th=[ 383], 00:15:58.557 | 99.00th=[ 420], 99.50th=[ 457], 99.90th=[ 482], 99.95th=[ 482], 00:15:58.557 | 99.99th=[ 482] 00:15:58.557 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:58.557 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:58.557 lat (usec) : 250=27.07%, 500=62.94%, 750=8.27% 00:15:58.557 lat (msec) : 50=1.72% 00:15:58.557 cpu : usr=0.78%, sys=1.85%, ctx=931, majf=0, minf=2 00:15:58.557 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.557 issued rwts: total=419,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.557 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.557 00:15:58.557 Run status group 0 (all jobs): 00:15:58.557 READ: bw=1634KiB/s (1673kB/s), 1634KiB/s-1634KiB/s (1673kB/s-1673kB/s), io=1676KiB (1716kB), run=1026-1026msec 00:15:58.557 WRITE: bw=1996KiB/s (2044kB/s), 1996KiB/s-1996KiB/s (2044kB/s-2044kB/s), io=2048KiB (2097kB), run=1026-1026msec 00:15:58.557 00:15:58.557 Disk stats (read/write): 00:15:58.557 nvme0n1: ios=465/512, merge=0/0, ticks=726/138, in_queue=864, util=92.18% 00:15:58.557 01:23:50 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:58.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:58.557 01:23:50 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:58.557 01:23:50 -- common/autotest_common.sh@1198 -- # local i=0 00:15:58.557 01:23:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:58.557 01:23:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:58.557 01:23:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:58.557 01:23:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:58.557 01:23:50 -- common/autotest_common.sh@1210 -- # return 0 00:15:58.557 01:23:50 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:58.557 01:23:50 -- target/nmic.sh@53 -- # nvmftestfini 00:15:58.557 01:23:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:58.557 01:23:50 -- nvmf/common.sh@116 -- # sync 00:15:58.557 01:23:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:58.557 01:23:50 -- nvmf/common.sh@119 -- # set +e 00:15:58.557 01:23:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:58.557 01:23:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:58.557 rmmod nvme_tcp 00:15:58.557 rmmod nvme_fabrics 00:15:58.557 rmmod nvme_keyring 00:15:58.815 01:23:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:58.815 01:23:50 -- nvmf/common.sh@123 -- # set -e 00:15:58.815 01:23:50 -- 
nvmf/common.sh@124 -- # return 0 00:15:58.815 01:23:50 -- nvmf/common.sh@477 -- # '[' -n 634891 ']' 00:15:58.815 01:23:50 -- nvmf/common.sh@478 -- # killprocess 634891 00:15:58.815 01:23:50 -- common/autotest_common.sh@926 -- # '[' -z 634891 ']' 00:15:58.815 01:23:50 -- common/autotest_common.sh@930 -- # kill -0 634891 00:15:58.815 01:23:50 -- common/autotest_common.sh@931 -- # uname 00:15:58.815 01:23:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:58.815 01:23:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 634891 00:15:58.815 01:23:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:58.815 01:23:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:58.815 01:23:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 634891' 00:15:58.815 killing process with pid 634891 00:15:58.815 01:23:50 -- common/autotest_common.sh@945 -- # kill 634891 00:15:58.815 01:23:50 -- common/autotest_common.sh@950 -- # wait 634891 00:15:59.073 01:23:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:59.073 01:23:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:59.073 01:23:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:59.073 01:23:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.073 01:23:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:59.073 01:23:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.073 01:23:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.073 01:23:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.010 01:23:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:01.010 00:16:01.010 real 0m10.630s 00:16:01.010 user 0m25.458s 00:16:01.010 sys 0m2.289s 00:16:01.010 01:23:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.011 01:23:52 -- common/autotest_common.sh@10 -- # set +x 00:16:01.011 ************************************ 00:16:01.011 END TEST nvmf_nmic 00:16:01.011 ************************************ 00:16:01.011 01:23:52 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:01.011 01:23:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:01.011 01:23:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:01.011 01:23:52 -- common/autotest_common.sh@10 -- # set +x 00:16:01.011 ************************************ 00:16:01.011 START TEST nvmf_fio_target 00:16:01.011 ************************************ 00:16:01.011 01:23:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:01.011 * Looking for test storage... 
00:16:01.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.011 01:23:52 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.011 01:23:52 -- nvmf/common.sh@7 -- # uname -s 00:16:01.011 01:23:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.011 01:23:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.011 01:23:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.011 01:23:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.011 01:23:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.011 01:23:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.011 01:23:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.011 01:23:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.011 01:23:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.011 01:23:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.011 01:23:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.011 01:23:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.011 01:23:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.011 01:23:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.011 01:23:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.011 01:23:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.011 01:23:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.011 01:23:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.011 01:23:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.011 01:23:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.011 01:23:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.011 01:23:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.011 01:23:52 -- paths/export.sh@5 -- # export PATH 00:16:01.011 01:23:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.011 01:23:52 -- nvmf/common.sh@46 -- # : 0 00:16:01.011 01:23:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:01.011 01:23:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:01.011 01:23:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:01.011 01:23:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.011 01:23:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.011 01:23:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:01.011 01:23:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:01.011 01:23:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:01.270 01:23:52 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:01.270 01:23:52 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:01.270 01:23:52 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:01.270 01:23:52 -- target/fio.sh@16 -- # nvmftestinit 00:16:01.270 01:23:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:01.270 01:23:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.270 01:23:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:01.270 01:23:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:01.270 01:23:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:01.270 01:23:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.270 01:23:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.270 01:23:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.270 01:23:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:01.270 01:23:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:01.270 01:23:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:01.270 01:23:52 -- common/autotest_common.sh@10 -- # set +x 00:16:03.172 01:23:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:03.172 01:23:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:03.172 01:23:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:03.172 01:23:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:03.172 01:23:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:03.172 01:23:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:03.172 01:23:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:03.172 01:23:54 -- nvmf/common.sh@294 -- # net_devs=() 
00:16:03.172 01:23:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:03.172 01:23:54 -- nvmf/common.sh@295 -- # e810=() 00:16:03.172 01:23:54 -- nvmf/common.sh@295 -- # local -ga e810 00:16:03.172 01:23:54 -- nvmf/common.sh@296 -- # x722=() 00:16:03.172 01:23:54 -- nvmf/common.sh@296 -- # local -ga x722 00:16:03.172 01:23:54 -- nvmf/common.sh@297 -- # mlx=() 00:16:03.172 01:23:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:03.172 01:23:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.172 01:23:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.172 01:23:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.172 01:23:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.172 01:23:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.172 01:23:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.172 01:23:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.172 01:23:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.172 01:23:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.172 01:23:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.172 01:23:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.172 01:23:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:03.172 01:23:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:03.172 01:23:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:03.172 01:23:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:03.172 01:23:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:03.172 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:03.172 01:23:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:03.172 01:23:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:03.172 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:03.172 01:23:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:03.172 01:23:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:03.172 01:23:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.172 01:23:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:03.172 01:23:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:16:03.172 01:23:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:03.172 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:03.172 01:23:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.172 01:23:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:03.172 01:23:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.172 01:23:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:03.172 01:23:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.172 01:23:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:03.172 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:03.172 01:23:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.172 01:23:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:03.172 01:23:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:03.172 01:23:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:03.172 01:23:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.172 01:23:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.172 01:23:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.172 01:23:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:03.172 01:23:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.172 01:23:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:03.172 01:23:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:03.172 01:23:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.172 01:23:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.172 01:23:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:03.172 01:23:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:03.172 01:23:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.172 01:23:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.172 01:23:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.172 01:23:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.172 01:23:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:03.172 01:23:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.172 01:23:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.172 01:23:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.172 01:23:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:03.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:16:03.172 00:16:03.172 --- 10.0.0.2 ping statistics --- 00:16:03.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.172 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:16:03.172 01:23:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:03.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:16:03.172 00:16:03.172 --- 10.0.0.1 ping statistics --- 00:16:03.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.172 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:16:03.172 01:23:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.172 01:23:54 -- nvmf/common.sh@410 -- # return 0 00:16:03.172 01:23:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:03.172 01:23:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.172 01:23:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:03.172 01:23:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.172 01:23:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:03.172 01:23:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:03.172 01:23:54 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:03.172 01:23:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:03.172 01:23:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:03.172 01:23:54 -- common/autotest_common.sh@10 -- # set +x 00:16:03.172 01:23:54 -- nvmf/common.sh@469 -- # nvmfpid=637754 00:16:03.173 01:23:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:03.173 01:23:54 -- nvmf/common.sh@470 -- # waitforlisten 637754 00:16:03.173 01:23:54 -- common/autotest_common.sh@819 -- # '[' -z 637754 ']' 00:16:03.173 01:23:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.173 01:23:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:03.173 01:23:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.173 01:23:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:03.173 01:23:54 -- common/autotest_common.sh@10 -- # set +x 00:16:03.173 [2024-07-27 01:23:54.878724] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:03.173 [2024-07-27 01:23:54.878799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.173 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.430 [2024-07-27 01:23:54.948868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.430 [2024-07-27 01:23:55.071909] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:03.430 [2024-07-27 01:23:55.072096] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.430 [2024-07-27 01:23:55.072118] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.430 [2024-07-27 01:23:55.072134] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:03.430 [2024-07-27 01:23:55.072194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.430 [2024-07-27 01:23:55.072246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.430 [2024-07-27 01:23:55.072298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.430 [2024-07-27 01:23:55.072301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.366 01:23:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:04.366 01:23:55 -- common/autotest_common.sh@852 -- # return 0 00:16:04.366 01:23:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:04.366 01:23:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:04.366 01:23:55 -- common/autotest_common.sh@10 -- # set +x 00:16:04.366 01:23:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.366 01:23:55 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:04.366 [2024-07-27 01:23:56.121531] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.626 01:23:56 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.885 01:23:56 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:04.885 01:23:56 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:05.144 01:23:56 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:05.144 01:23:56 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:05.404 01:23:56 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:05.404 01:23:56 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:05.663 01:23:57 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:05.663 01:23:57 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:05.663 01:23:57 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:05.921 01:23:57 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:05.921 01:23:57 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:06.180 01:23:57 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:06.180 01:23:57 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:06.438 01:23:58 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:06.438 01:23:58 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:06.697 01:23:58 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:06.955 01:23:58 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:06.955 01:23:58 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:07.213 01:23:58 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:07.213 01:23:58 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:07.471 01:23:59 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.729 [2024-07-27 01:23:59.356392] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.729 01:23:59 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:07.987 01:23:59 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:08.246 01:23:59 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:08.811 01:24:00 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:08.811 01:24:00 -- common/autotest_common.sh@1177 -- # local i=0 00:16:08.811 01:24:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.811 01:24:00 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:16:08.811 01:24:00 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:16:08.811 01:24:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:10.714 01:24:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:10.714 01:24:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:10.714 01:24:02 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:10.714 01:24:02 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:16:10.714 01:24:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.714 01:24:02 -- common/autotest_common.sh@1187 -- # return 0 00:16:10.714 01:24:02 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:10.714 [global] 00:16:10.714 thread=1 00:16:10.714 invalidate=1 00:16:10.714 rw=write 00:16:10.714 time_based=1 00:16:10.714 runtime=1 00:16:10.714 ioengine=libaio 00:16:10.714 direct=1 00:16:10.714 bs=4096 00:16:10.714 iodepth=1 00:16:10.714 norandommap=0 00:16:10.714 numjobs=1 00:16:10.714 00:16:10.714 verify_dump=1 00:16:10.714 verify_backlog=512 00:16:10.714 verify_state_save=0 00:16:10.714 do_verify=1 00:16:10.714 verify=crc32c-intel 00:16:10.714 [job0] 00:16:10.714 filename=/dev/nvme0n1 00:16:10.714 [job1] 00:16:10.714 filename=/dev/nvme0n2 00:16:10.714 [job2] 00:16:10.714 filename=/dev/nvme0n3 00:16:10.714 [job3] 00:16:10.714 filename=/dev/nvme0n4 00:16:10.972 Could not set queue depth (nvme0n1) 00:16:10.972 Could not set queue depth (nvme0n2) 00:16:10.972 Could not set queue depth (nvme0n3) 00:16:10.972 Could not set queue depth (nvme0n4) 00:16:10.972 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.972 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.972 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.972 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.972 fio-3.35 
00:16:10.972 Starting 4 threads 00:16:12.346 00:16:12.346 job0: (groupid=0, jobs=1): err= 0: pid=638984: Sat Jul 27 01:24:03 2024 00:16:12.347 read: IOPS=176, BW=707KiB/s (724kB/s)(708KiB/1001msec) 00:16:12.347 slat (nsec): min=5385, max=50540, avg=19685.94, stdev=10572.20 00:16:12.347 clat (usec): min=295, max=41585, avg=4753.43, stdev=12604.19 00:16:12.347 lat (usec): min=310, max=41604, avg=4773.12, stdev=12604.86 00:16:12.347 clat percentiles (usec): 00:16:12.347 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 326], 20.00th=[ 343], 00:16:12.347 | 30.00th=[ 355], 40.00th=[ 371], 50.00th=[ 392], 60.00th=[ 400], 00:16:12.347 | 70.00th=[ 437], 80.00th=[ 498], 90.00th=[40633], 95.00th=[41157], 00:16:12.347 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:12.347 | 99.99th=[41681] 00:16:12.347 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:12.347 slat (nsec): min=6693, max=64661, avg=18994.25, stdev=9803.24 00:16:12.347 clat (usec): min=209, max=431, avg=277.80, stdev=50.02 00:16:12.347 lat (usec): min=218, max=459, avg=296.79, stdev=51.27 00:16:12.347 clat percentiles (usec): 00:16:12.347 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 243], 00:16:12.347 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:16:12.347 | 70.00th=[ 281], 80.00th=[ 314], 90.00th=[ 371], 95.00th=[ 392], 00:16:12.347 | 99.00th=[ 416], 99.50th=[ 420], 99.90th=[ 433], 99.95th=[ 433], 00:16:12.347 | 99.99th=[ 433] 00:16:12.347 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=1 00:16:12.347 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:12.347 lat (usec) : 250=24.38%, 500=70.68%, 750=2.03%, 1000=0.15% 00:16:12.347 lat (msec) : 50=2.76% 00:16:12.347 cpu : usr=1.30%, sys=0.70%, ctx=689, majf=0, minf=1 00:16:12.347 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.347 issued rwts: total=177,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.347 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.347 job1: (groupid=0, jobs=1): err= 0: pid=638985: Sat Jul 27 01:24:03 2024 00:16:12.347 read: IOPS=1354, BW=5419KiB/s (5549kB/s)(5424KiB/1001msec) 00:16:12.347 slat (nsec): min=5906, max=51407, avg=13296.78, stdev=5413.17 00:16:12.347 clat (usec): min=316, max=748, avg=421.56, stdev=30.04 00:16:12.347 lat (usec): min=324, max=765, avg=434.85, stdev=32.73 00:16:12.347 clat percentiles (usec): 00:16:12.347 | 1.00th=[ 375], 5.00th=[ 383], 10.00th=[ 388], 20.00th=[ 396], 00:16:12.347 | 30.00th=[ 404], 40.00th=[ 412], 50.00th=[ 420], 60.00th=[ 424], 00:16:12.347 | 70.00th=[ 433], 80.00th=[ 441], 90.00th=[ 465], 95.00th=[ 474], 00:16:12.347 | 99.00th=[ 498], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 750], 00:16:12.347 | 99.99th=[ 750] 00:16:12.347 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:12.347 slat (nsec): min=7342, max=56913, avg=17495.24, stdev=6516.29 00:16:12.347 clat (usec): min=191, max=530, avg=241.36, stdev=32.16 00:16:12.347 lat (usec): min=202, max=556, avg=258.85, stdev=36.15 00:16:12.347 clat percentiles (usec): 00:16:12.347 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:16:12.347 | 30.00th=[ 221], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 245], 00:16:12.347 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 
293], 00:16:12.347 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 529], 99.95th=[ 529], 00:16:12.347 | 99.99th=[ 529] 00:16:12.347 bw ( KiB/s): min= 7760, max= 7760, per=65.17%, avg=7760.00, stdev= 0.00, samples=1 00:16:12.347 iops : min= 1940, max= 1940, avg=1940.00, stdev= 0.00, samples=1 00:16:12.347 lat (usec) : 250=35.65%, 500=63.90%, 750=0.45% 00:16:12.347 cpu : usr=3.70%, sys=5.90%, ctx=2892, majf=0, minf=1 00:16:12.347 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.347 issued rwts: total=1356,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.347 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.347 job2: (groupid=0, jobs=1): err= 0: pid=638986: Sat Jul 27 01:24:03 2024 00:16:12.347 read: IOPS=20, BW=83.5KiB/s (85.5kB/s)(84.0KiB/1006msec) 00:16:12.347 slat (nsec): min=10867, max=14971, avg=13977.00, stdev=827.60 00:16:12.347 clat (usec): min=40933, max=41976, avg=41047.23, stdev=231.86 00:16:12.347 lat (usec): min=40947, max=41990, avg=41061.21, stdev=231.64 00:16:12.347 clat percentiles (usec): 00:16:12.347 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:12.347 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:12.347 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:12.347 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:12.347 | 99.99th=[42206] 00:16:12.347 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:16:12.347 slat (usec): min=10, max=1934, avg=16.16, stdev=84.99 00:16:12.347 clat (usec): min=211, max=362, avg=260.09, stdev=26.35 00:16:12.347 lat (usec): min=223, max=2178, avg=276.25, stdev=88.32 00:16:12.347 clat percentiles (usec): 00:16:12.347 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:16:12.347 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 262], 00:16:12.347 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 302], 00:16:12.347 | 99.00th=[ 343], 99.50th=[ 351], 99.90th=[ 363], 99.95th=[ 363], 00:16:12.347 | 99.99th=[ 363] 00:16:12.347 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=1 00:16:12.347 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:12.347 lat (usec) : 250=42.21%, 500=53.85% 00:16:12.347 lat (msec) : 50=3.94% 00:16:12.347 cpu : usr=0.20%, sys=0.80%, ctx=538, majf=0, minf=2 00:16:12.347 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.347 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.347 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.347 job3: (groupid=0, jobs=1): err= 0: pid=638987: Sat Jul 27 01:24:03 2024 00:16:12.347 read: IOPS=135, BW=543KiB/s (556kB/s)(560KiB/1032msec) 00:16:12.347 slat (nsec): min=5681, max=62356, avg=20976.41, stdev=11313.69 00:16:12.347 clat (usec): min=314, max=41941, avg=6191.40, stdev=14255.64 00:16:12.347 lat (usec): min=326, max=41973, avg=6212.38, stdev=14256.55 00:16:12.347 clat percentiles (usec): 00:16:12.347 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 338], 00:16:12.347 | 30.00th=[ 363], 40.00th=[ 379], 50.00th=[ 392], 
60.00th=[ 400], 00:16:12.347 | 70.00th=[ 465], 80.00th=[ 498], 90.00th=[41157], 95.00th=[41157], 00:16:12.347 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:12.347 | 99.99th=[41681] 00:16:12.347 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:16:12.347 slat (nsec): min=7158, max=53077, avg=13206.22, stdev=6459.53 00:16:12.347 clat (usec): min=206, max=3420, avg=297.23, stdev=148.69 00:16:12.347 lat (usec): min=214, max=3430, avg=310.43, stdev=149.07 00:16:12.347 clat percentiles (usec): 00:16:12.347 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 247], 00:16:12.347 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 293], 00:16:12.347 | 70.00th=[ 302], 80.00th=[ 322], 90.00th=[ 379], 95.00th=[ 412], 00:16:12.347 | 99.00th=[ 461], 99.50th=[ 553], 99.90th=[ 3425], 99.95th=[ 3425], 00:16:12.347 | 99.99th=[ 3425] 00:16:12.347 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=1 00:16:12.347 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:12.347 lat (usec) : 250=18.40%, 500=76.99%, 750=1.38% 00:16:12.347 lat (msec) : 4=0.15%, 50=3.07% 00:16:12.347 cpu : usr=0.48%, sys=1.07%, ctx=654, majf=0, minf=1 00:16:12.347 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.347 issued rwts: total=140,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.347 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.347 00:16:12.347 Run status group 0 (all jobs): 00:16:12.347 READ: bw=6566KiB/s (6723kB/s), 83.5KiB/s-5419KiB/s (85.5kB/s-5549kB/s), io=6776KiB (6939kB), run=1001-1032msec 00:16:12.347 WRITE: bw=11.6MiB/s (12.2MB/s), 1984KiB/s-6138KiB/s (2032kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1032msec 00:16:12.347 00:16:12.347 Disk stats (read/write): 00:16:12.347 nvme0n1: ios=111/512, merge=0/0, ticks=720/142, in_queue=862, util=86.77% 00:16:12.347 nvme0n2: ios=1051/1520, merge=0/0, ticks=448/347, in_queue=795, util=87.17% 00:16:12.347 nvme0n3: ios=81/512, merge=0/0, ticks=946/125, in_queue=1071, util=97.48% 00:16:12.347 nvme0n4: ios=33/512, merge=0/0, ticks=662/146, in_queue=808, util=89.50% 00:16:12.347 01:24:03 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:12.347 [global] 00:16:12.347 thread=1 00:16:12.347 invalidate=1 00:16:12.347 rw=randwrite 00:16:12.347 time_based=1 00:16:12.347 runtime=1 00:16:12.347 ioengine=libaio 00:16:12.347 direct=1 00:16:12.347 bs=4096 00:16:12.347 iodepth=1 00:16:12.347 norandommap=0 00:16:12.347 numjobs=1 00:16:12.347 00:16:12.347 verify_dump=1 00:16:12.347 verify_backlog=512 00:16:12.347 verify_state_save=0 00:16:12.347 do_verify=1 00:16:12.347 verify=crc32c-intel 00:16:12.347 [job0] 00:16:12.347 filename=/dev/nvme0n1 00:16:12.347 [job1] 00:16:12.347 filename=/dev/nvme0n2 00:16:12.347 [job2] 00:16:12.347 filename=/dev/nvme0n3 00:16:12.347 [job3] 00:16:12.347 filename=/dev/nvme0n4 00:16:12.347 Could not set queue depth (nvme0n1) 00:16:12.347 Could not set queue depth (nvme0n2) 00:16:12.347 Could not set queue depth (nvme0n3) 00:16:12.347 Could not set queue depth (nvme0n4) 00:16:12.605 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.605 job1: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.605 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.605 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.605 fio-3.35 00:16:12.605 Starting 4 threads 00:16:13.984 00:16:13.984 job0: (groupid=0, jobs=1): err= 0: pid=639215: Sat Jul 27 01:24:05 2024 00:16:13.984 read: IOPS=111, BW=446KiB/s (456kB/s)(448KiB/1005msec) 00:16:13.984 slat (nsec): min=8798, max=51288, avg=15294.42, stdev=10053.51 00:16:13.984 clat (usec): min=386, max=42031, avg=7390.79, stdev=15438.55 00:16:13.984 lat (usec): min=397, max=42041, avg=7406.09, stdev=15446.10 00:16:13.984 clat percentiles (usec): 00:16:13.984 | 1.00th=[ 388], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 408], 00:16:13.984 | 30.00th=[ 416], 40.00th=[ 437], 50.00th=[ 449], 60.00th=[ 469], 00:16:13.984 | 70.00th=[ 490], 80.00th=[ 553], 90.00th=[41157], 95.00th=[41681], 00:16:13.984 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:13.984 | 99.99th=[42206] 00:16:13.984 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:16:13.984 slat (nsec): min=8298, max=63972, avg=20907.84, stdev=9510.45 00:16:13.984 clat (usec): min=211, max=570, avg=308.52, stdev=67.01 00:16:13.984 lat (usec): min=227, max=606, avg=329.43, stdev=69.42 00:16:13.984 clat percentiles (usec): 00:16:13.984 | 1.00th=[ 225], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 260], 00:16:13.984 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 297], 00:16:13.984 | 70.00th=[ 322], 80.00th=[ 351], 90.00th=[ 416], 95.00th=[ 457], 00:16:13.984 | 99.00th=[ 510], 99.50th=[ 570], 99.90th=[ 570], 99.95th=[ 570], 00:16:13.984 | 99.99th=[ 570] 00:16:13.984 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:16:13.984 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:13.984 lat (usec) : 250=7.05%, 500=86.70%, 750=3.21% 00:16:13.984 lat (msec) : 50=3.04% 00:16:13.984 cpu : usr=0.80%, sys=1.49%, ctx=625, majf=0, minf=1 00:16:13.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.985 issued rwts: total=112,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.985 job1: (groupid=0, jobs=1): err= 0: pid=639216: Sat Jul 27 01:24:05 2024 00:16:13.985 read: IOPS=533, BW=2135KiB/s (2186kB/s)(2188KiB/1025msec) 00:16:13.985 slat (nsec): min=7605, max=62847, avg=24828.50, stdev=9571.98 00:16:13.985 clat (usec): min=303, max=42193, avg=1319.17, stdev=5815.84 00:16:13.985 lat (usec): min=317, max=42256, avg=1344.00, stdev=5816.81 00:16:13.985 clat percentiles (usec): 00:16:13.985 | 1.00th=[ 322], 5.00th=[ 351], 10.00th=[ 375], 20.00th=[ 424], 00:16:13.985 | 30.00th=[ 461], 40.00th=[ 482], 50.00th=[ 494], 60.00th=[ 510], 00:16:13.985 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 594], 95.00th=[ 627], 00:16:13.985 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:13.985 | 99.99th=[42206] 00:16:13.985 write: IOPS=999, BW=3996KiB/s (4092kB/s)(4096KiB/1025msec); 0 zone resets 00:16:13.985 slat (nsec): min=6604, max=59392, avg=18760.44, stdev=7737.69 00:16:13.985 clat (usec): min=195, max=897, avg=254.59, 
stdev=54.84 00:16:13.985 lat (usec): min=213, max=905, avg=273.35, stdev=55.30 00:16:13.985 clat percentiles (usec): 00:16:13.985 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:16:13.985 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247], 00:16:13.985 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 326], 95.00th=[ 379], 00:16:13.985 | 99.00th=[ 441], 99.50th=[ 474], 99.90th=[ 478], 99.95th=[ 898], 00:16:13.985 | 99.99th=[ 898] 00:16:13.985 bw ( KiB/s): min= 2880, max= 5301, per=41.50%, avg=4090.50, stdev=1711.91, samples=2 00:16:13.985 iops : min= 720, max= 1325, avg=1022.50, stdev=427.80, samples=2 00:16:13.985 lat (usec) : 250=41.69%, 500=42.27%, 750=15.28%, 1000=0.06% 00:16:13.985 lat (msec) : 50=0.70% 00:16:13.985 cpu : usr=1.66%, sys=3.32%, ctx=1574, majf=0, minf=1 00:16:13.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.985 issued rwts: total=547,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.985 job2: (groupid=0, jobs=1): err= 0: pid=639219: Sat Jul 27 01:24:05 2024 00:16:13.985 read: IOPS=363, BW=1455KiB/s (1490kB/s)(1512KiB/1039msec) 00:16:13.985 slat (nsec): min=6708, max=64922, avg=21137.33, stdev=9765.38 00:16:13.985 clat (usec): min=329, max=41503, avg=2253.85, stdev=8437.22 00:16:13.985 lat (usec): min=337, max=41521, avg=2274.99, stdev=8438.99 00:16:13.985 clat percentiles (usec): 00:16:13.985 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 367], 00:16:13.985 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 412], 00:16:13.985 | 70.00th=[ 420], 80.00th=[ 433], 90.00th=[ 482], 95.00th=[ 3654], 00:16:13.985 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:13.985 | 99.99th=[41681] 00:16:13.985 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:16:13.985 slat (nsec): min=6639, max=69776, avg=19700.71, stdev=9815.98 00:16:13.985 clat (usec): min=212, max=2394, avg=313.10, stdev=113.96 00:16:13.985 lat (usec): min=235, max=2411, avg=332.80, stdev=114.47 00:16:13.985 clat percentiles (usec): 00:16:13.985 | 1.00th=[ 221], 5.00th=[ 239], 10.00th=[ 251], 20.00th=[ 262], 00:16:13.985 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 306], 00:16:13.985 | 70.00th=[ 326], 80.00th=[ 359], 90.00th=[ 396], 95.00th=[ 424], 00:16:13.985 | 99.00th=[ 562], 99.50th=[ 701], 99.90th=[ 2409], 99.95th=[ 2409], 00:16:13.985 | 99.99th=[ 2409] 00:16:13.985 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:16:13.985 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:13.985 lat (usec) : 250=5.62%, 500=90.00%, 750=2.02%, 1000=0.11% 00:16:13.985 lat (msec) : 4=0.22%, 10=0.11%, 50=1.91% 00:16:13.985 cpu : usr=0.87%, sys=1.83%, ctx=893, majf=0, minf=1 00:16:13.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.985 issued rwts: total=378,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.985 job3: (groupid=0, jobs=1): err= 0: pid=639220: Sat Jul 27 01:24:05 2024 00:16:13.985 read: IOPS=21, 
BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:16:13.985 slat (nsec): min=13298, max=44907, avg=29105.09, stdev=9962.10 00:16:13.985 clat (usec): min=1015, max=42120, avg=38399.32, stdev=10448.50 00:16:13.985 lat (usec): min=1033, max=42144, avg=38428.42, stdev=10452.04 00:16:13.985 clat percentiles (usec): 00:16:13.985 | 1.00th=[ 1012], 5.00th=[12125], 10.00th=[40633], 20.00th=[41157], 00:16:13.985 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:16:13.985 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:13.985 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:13.985 | 99.99th=[42206] 00:16:13.985 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:16:13.985 slat (nsec): min=6994, max=60957, avg=19595.95, stdev=9904.47 00:16:13.985 clat (usec): min=212, max=578, avg=287.91, stdev=47.09 00:16:13.985 lat (usec): min=228, max=586, avg=307.51, stdev=47.58 00:16:13.985 clat percentiles (usec): 00:16:13.985 | 1.00th=[ 227], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:16:13.985 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:16:13.985 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 351], 95.00th=[ 388], 00:16:13.985 | 99.00th=[ 433], 99.50th=[ 515], 99.90th=[ 578], 99.95th=[ 578], 00:16:13.985 | 99.99th=[ 578] 00:16:13.985 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:16:13.985 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:13.985 lat (usec) : 250=13.11%, 500=82.21%, 750=0.56% 00:16:13.985 lat (msec) : 2=0.19%, 20=0.19%, 50=3.75% 00:16:13.985 cpu : usr=0.20%, sys=1.69%, ctx=535, majf=0, minf=2 00:16:13.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.985 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.985 00:16:13.985 Run status group 0 (all jobs): 00:16:13.985 READ: bw=4077KiB/s (4175kB/s), 87.2KiB/s-2135KiB/s (89.3kB/s-2186kB/s), io=4236KiB (4338kB), run=1005-1039msec 00:16:13.985 WRITE: bw=9856KiB/s (10.1MB/s), 1971KiB/s-3996KiB/s (2018kB/s-4092kB/s), io=10.0MiB (10.5MB), run=1005-1039msec 00:16:13.985 00:16:13.985 Disk stats (read/write): 00:16:13.985 nvme0n1: ios=150/512, merge=0/0, ticks=847/145, in_queue=992, util=96.79% 00:16:13.985 nvme0n2: ios=568/1024, merge=0/0, ticks=1492/241, in_queue=1733, util=97.36% 00:16:13.985 nvme0n3: ios=398/512, merge=0/0, ticks=1599/156, in_queue=1755, util=97.39% 00:16:13.985 nvme0n4: ios=43/512, merge=0/0, ticks=1622/141, in_queue=1763, util=97.47% 00:16:13.985 01:24:05 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:13.985 [global] 00:16:13.985 thread=1 00:16:13.985 invalidate=1 00:16:13.985 rw=write 00:16:13.985 time_based=1 00:16:13.985 runtime=1 00:16:13.985 ioengine=libaio 00:16:13.985 direct=1 00:16:13.985 bs=4096 00:16:13.985 iodepth=128 00:16:13.985 norandommap=0 00:16:13.985 numjobs=1 00:16:13.985 00:16:13.985 verify_dump=1 00:16:13.985 verify_backlog=512 00:16:13.985 verify_state_save=0 00:16:13.985 do_verify=1 00:16:13.985 verify=crc32c-intel 00:16:13.985 [job0] 00:16:13.985 filename=/dev/nvme0n1 00:16:13.985 [job1] 00:16:13.985 filename=/dev/nvme0n2 00:16:13.985 [job2] 
00:16:13.985 filename=/dev/nvme0n3 00:16:13.985 [job3] 00:16:13.985 filename=/dev/nvme0n4 00:16:13.985 Could not set queue depth (nvme0n1) 00:16:13.985 Could not set queue depth (nvme0n2) 00:16:13.985 Could not set queue depth (nvme0n3) 00:16:13.985 Could not set queue depth (nvme0n4) 00:16:13.985 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.985 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.985 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.985 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.985 fio-3.35 00:16:13.985 Starting 4 threads 00:16:15.364 00:16:15.364 job0: (groupid=0, jobs=1): err= 0: pid=639459: Sat Jul 27 01:24:06 2024 00:16:15.364 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:16:15.364 slat (usec): min=3, max=14896, avg=117.33, stdev=896.88 00:16:15.364 clat (usec): min=2991, max=37846, avg=17384.85, stdev=5876.49 00:16:15.364 lat (usec): min=3174, max=37864, avg=17502.18, stdev=5937.56 00:16:15.364 clat percentiles (usec): 00:16:15.364 | 1.00th=[ 5342], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[11469], 00:16:15.364 | 30.00th=[13698], 40.00th=[15664], 50.00th=[17171], 60.00th=[17957], 00:16:15.364 | 70.00th=[21365], 80.00th=[22938], 90.00th=[24249], 95.00th=[27132], 00:16:15.364 | 99.00th=[31589], 99.50th=[31851], 99.90th=[36439], 99.95th=[36439], 00:16:15.364 | 99.99th=[38011] 00:16:15.364 write: IOPS=3910, BW=15.3MiB/s (16.0MB/s)(15.5MiB/1012msec); 0 zone resets 00:16:15.364 slat (usec): min=3, max=12862, avg=125.30, stdev=826.54 00:16:15.364 clat (usec): min=1346, max=86401, avg=16490.75, stdev=15285.87 00:16:15.364 lat (usec): min=1359, max=86413, avg=16616.05, stdev=15384.16 00:16:15.364 clat percentiles (usec): 00:16:15.364 | 1.00th=[ 4817], 5.00th=[ 6521], 10.00th=[ 7373], 20.00th=[ 8455], 00:16:15.364 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[11600], 60.00th=[13304], 00:16:15.364 | 70.00th=[15401], 80.00th=[19006], 90.00th=[31851], 95.00th=[57934], 00:16:15.364 | 99.00th=[79168], 99.50th=[84411], 99.90th=[86508], 99.95th=[86508], 00:16:15.364 | 99.99th=[86508] 00:16:15.364 bw ( KiB/s): min=10152, max=20480, per=22.68%, avg=15316.00, stdev=7303.00, samples=2 00:16:15.364 iops : min= 2538, max= 5120, avg=3829.00, stdev=1825.75, samples=2 00:16:15.364 lat (msec) : 2=0.13%, 4=0.13%, 10=27.75%, 20=47.14%, 50=21.99% 00:16:15.364 lat (msec) : 100=2.85% 00:16:15.364 cpu : usr=7.32%, sys=8.70%, ctx=211, majf=0, minf=1 00:16:15.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:15.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.364 issued rwts: total=3584,3957,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.364 job1: (groupid=0, jobs=1): err= 0: pid=639460: Sat Jul 27 01:24:06 2024 00:16:15.364 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:16:15.364 slat (usec): min=2, max=21472, avg=144.44, stdev=1179.16 00:16:15.364 clat (usec): min=5347, max=56861, avg=20423.46, stdev=9166.59 00:16:15.364 lat (usec): min=7122, max=56899, avg=20567.89, stdev=9257.63 00:16:15.364 clat percentiles (usec): 00:16:15.364 | 1.00th=[ 8455], 5.00th=[10159], 10.00th=[11076], 
20.00th=[11994], 00:16:15.364 | 30.00th=[14222], 40.00th=[15008], 50.00th=[18744], 60.00th=[20579], 00:16:15.364 | 70.00th=[23462], 80.00th=[29230], 90.00th=[34341], 95.00th=[38536], 00:16:15.364 | 99.00th=[45351], 99.50th=[45351], 99.90th=[50070], 99.95th=[53216], 00:16:15.364 | 99.99th=[56886] 00:16:15.364 write: IOPS=3917, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1006msec); 0 zone resets 00:16:15.364 slat (usec): min=3, max=11593, avg=99.20, stdev=740.44 00:16:15.364 clat (usec): min=3221, max=35801, avg=13710.05, stdev=5386.86 00:16:15.364 lat (usec): min=4575, max=35812, avg=13809.25, stdev=5426.40 00:16:15.364 clat percentiles (usec): 00:16:15.364 | 1.00th=[ 5735], 5.00th=[ 7046], 10.00th=[ 7832], 20.00th=[ 9110], 00:16:15.364 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11600], 60.00th=[13698], 00:16:15.364 | 70.00th=[15926], 80.00th=[20055], 90.00th=[22152], 95.00th=[23200], 00:16:15.364 | 99.00th=[26346], 99.50th=[26870], 99.90th=[35914], 99.95th=[35914], 00:16:15.364 | 99.99th=[35914] 00:16:15.364 bw ( KiB/s): min=14576, max=15928, per=22.58%, avg=15252.00, stdev=956.01, samples=2 00:16:15.364 iops : min= 3644, max= 3982, avg=3813.00, stdev=239.00, samples=2 00:16:15.364 lat (msec) : 4=0.01%, 10=17.08%, 20=52.50%, 50=30.37%, 100=0.04% 00:16:15.364 cpu : usr=5.37%, sys=9.45%, ctx=205, majf=0, minf=1 00:16:15.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:15.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.364 issued rwts: total=3584,3941,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.364 job2: (groupid=0, jobs=1): err= 0: pid=639461: Sat Jul 27 01:24:06 2024 00:16:15.364 read: IOPS=4534, BW=17.7MiB/s (18.6MB/s)(18.5MiB/1044msec) 00:16:15.364 slat (usec): min=3, max=19912, avg=94.91, stdev=541.51 00:16:15.364 clat (usec): min=8766, max=56054, avg=13994.87, stdev=6582.54 00:16:15.364 lat (usec): min=9289, max=56076, avg=14089.78, stdev=6592.61 00:16:15.364 clat percentiles (usec): 00:16:15.364 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10945], 20.00th=[11731], 00:16:15.364 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12649], 00:16:15.364 | 70.00th=[13042], 80.00th=[13960], 90.00th=[15139], 95.00th=[22938], 00:16:15.364 | 99.00th=[49021], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:16:15.364 | 99.99th=[55837] 00:16:15.364 write: IOPS=4904, BW=19.2MiB/s (20.1MB/s)(20.0MiB/1044msec); 0 zone resets 00:16:15.364 slat (usec): min=3, max=19846, avg=93.33, stdev=525.37 00:16:15.364 clat (usec): min=4122, max=56085, avg=12767.98, stdev=3641.87 00:16:15.364 lat (usec): min=4134, max=56102, avg=12861.31, stdev=3652.15 00:16:15.364 clat percentiles (usec): 00:16:15.364 | 1.00th=[ 8160], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11469], 00:16:15.364 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12387], 60.00th=[12518], 00:16:15.364 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13829], 95.00th=[15664], 00:16:15.364 | 99.00th=[29492], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:16:15.364 | 99.99th=[55837] 00:16:15.364 bw ( KiB/s): min=19672, max=21272, per=30.31%, avg=20472.00, stdev=1131.37, samples=2 00:16:15.364 iops : min= 4918, max= 5318, avg=5118.00, stdev=282.84, samples=2 00:16:15.364 lat (msec) : 10=4.69%, 20=90.04%, 50=4.84%, 100=0.43% 00:16:15.364 cpu : usr=7.48%, sys=13.42%, ctx=530, majf=0, minf=1 00:16:15.364 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:15.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.364 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.364 job3: (groupid=0, jobs=1): err= 0: pid=639462: Sat Jul 27 01:24:06 2024 00:16:15.365 read: IOPS=4318, BW=16.9MiB/s (17.7MB/s)(16.9MiB/1001msec) 00:16:15.365 slat (usec): min=3, max=3993, avg=96.22, stdev=393.83 00:16:15.365 clat (usec): min=663, max=23711, avg=13015.91, stdev=2567.63 00:16:15.365 lat (usec): min=1236, max=23729, avg=13112.13, stdev=2562.14 00:16:15.365 clat percentiles (usec): 00:16:15.365 | 1.00th=[ 6259], 5.00th=[10683], 10.00th=[11338], 20.00th=[11863], 00:16:15.365 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:16:15.365 | 70.00th=[12911], 80.00th=[13435], 90.00th=[15139], 95.00th=[19530], 00:16:15.365 | 99.00th=[22414], 99.50th=[22414], 99.90th=[23725], 99.95th=[23725], 00:16:15.365 | 99.99th=[23725] 00:16:15.365 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:16:15.365 slat (usec): min=4, max=18858, avg=112.74, stdev=675.89 00:16:15.365 clat (usec): min=9254, max=56129, avg=15197.51, stdev=7945.75 00:16:15.365 lat (usec): min=9269, max=56149, avg=15310.25, stdev=7981.30 00:16:15.365 clat percentiles (usec): 00:16:15.365 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 00:16:15.365 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12911], 60.00th=[13173], 00:16:15.365 | 70.00th=[13435], 80.00th=[13829], 90.00th=[22414], 95.00th=[36963], 00:16:15.365 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55837], 99.95th=[56361], 00:16:15.365 | 99.99th=[56361] 00:16:15.365 bw ( KiB/s): min=16384, max=20480, per=27.29%, avg=18432.00, stdev=2896.31, samples=2 00:16:15.365 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:16:15.365 lat (usec) : 750=0.01% 00:16:15.365 lat (msec) : 2=0.01%, 4=0.36%, 10=1.59%, 20=89.17%, 50=8.16% 00:16:15.365 lat (msec) : 100=0.69% 00:16:15.365 cpu : usr=7.70%, sys=12.90%, ctx=541, majf=0, minf=1 00:16:15.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:15.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.365 issued rwts: total=4323,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.365 00:16:15.365 Run status group 0 (all jobs): 00:16:15.365 READ: bw=60.7MiB/s (63.7MB/s), 13.8MiB/s-17.7MiB/s (14.5MB/s-18.6MB/s), io=63.4MiB (66.5MB), run=1001-1044msec 00:16:15.365 WRITE: bw=65.9MiB/s (69.2MB/s), 15.3MiB/s-19.2MiB/s (16.0MB/s-20.1MB/s), io=68.9MiB (72.2MB), run=1001-1044msec 00:16:15.365 00:16:15.365 Disk stats (read/write): 00:16:15.365 nvme0n1: ios=3527/3584, merge=0/0, ticks=57540/41026, in_queue=98566, util=95.59% 00:16:15.365 nvme0n2: ios=2672/3072, merge=0/0, ticks=49303/38423, in_queue=87726, util=99.09% 00:16:15.365 nvme0n3: ios=4096/4131, merge=0/0, ticks=21425/17711, in_queue=39136, util=88.91% 00:16:15.365 nvme0n4: ios=3584/3751, merge=0/0, ticks=11661/13510, in_queue=25171, util=89.66% 00:16:15.365 01:24:06 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 
00:16:15.365 [global] 00:16:15.365 thread=1 00:16:15.365 invalidate=1 00:16:15.365 rw=randwrite 00:16:15.365 time_based=1 00:16:15.365 runtime=1 00:16:15.365 ioengine=libaio 00:16:15.365 direct=1 00:16:15.365 bs=4096 00:16:15.365 iodepth=128 00:16:15.365 norandommap=0 00:16:15.365 numjobs=1 00:16:15.365 00:16:15.365 verify_dump=1 00:16:15.365 verify_backlog=512 00:16:15.365 verify_state_save=0 00:16:15.365 do_verify=1 00:16:15.365 verify=crc32c-intel 00:16:15.365 [job0] 00:16:15.365 filename=/dev/nvme0n1 00:16:15.365 [job1] 00:16:15.365 filename=/dev/nvme0n2 00:16:15.365 [job2] 00:16:15.365 filename=/dev/nvme0n3 00:16:15.365 [job3] 00:16:15.365 filename=/dev/nvme0n4 00:16:15.365 Could not set queue depth (nvme0n1) 00:16:15.365 Could not set queue depth (nvme0n2) 00:16:15.365 Could not set queue depth (nvme0n3) 00:16:15.365 Could not set queue depth (nvme0n4) 00:16:15.365 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:15.365 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:15.365 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:15.365 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:15.365 fio-3.35 00:16:15.365 Starting 4 threads 00:16:16.740 00:16:16.740 job0: (groupid=0, jobs=1): err= 0: pid=639813: Sat Jul 27 01:24:08 2024 00:16:16.740 read: IOPS=2348, BW=9395KiB/s (9621kB/s)(9508KiB/1012msec) 00:16:16.740 slat (usec): min=3, max=30308, avg=188.17, stdev=1361.28 00:16:16.740 clat (usec): min=9174, max=65690, avg=22083.35, stdev=12227.95 00:16:16.740 lat (usec): min=10664, max=65701, avg=22271.52, stdev=12337.78 00:16:16.740 clat percentiles (usec): 00:16:16.740 | 1.00th=[10683], 5.00th=[13173], 10.00th=[13566], 20.00th=[13829], 00:16:16.740 | 30.00th=[13960], 40.00th=[14484], 50.00th=[15533], 60.00th=[16909], 00:16:16.740 | 70.00th=[25560], 80.00th=[33162], 90.00th=[39584], 95.00th=[48497], 00:16:16.740 | 99.00th=[61604], 99.50th=[61604], 99.90th=[65799], 99.95th=[65799], 00:16:16.740 | 99.99th=[65799] 00:16:16.740 write: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec); 0 zone resets 00:16:16.740 slat (usec): min=5, max=20260, avg=194.25, stdev=1085.87 00:16:16.740 clat (usec): min=9094, max=96018, avg=29241.60, stdev=16447.13 00:16:16.740 lat (usec): min=9643, max=96030, avg=29435.85, stdev=16551.47 00:16:16.740 clat percentiles (usec): 00:16:16.740 | 1.00th=[10552], 5.00th=[12518], 10.00th=[13435], 20.00th=[18220], 00:16:16.740 | 30.00th=[20317], 40.00th=[21365], 50.00th=[23200], 60.00th=[26608], 00:16:16.740 | 70.00th=[31589], 80.00th=[41157], 90.00th=[51643], 95.00th=[66323], 00:16:16.740 | 99.00th=[87557], 99.50th=[91751], 99.90th=[95945], 99.95th=[95945], 00:16:16.740 | 99.99th=[95945] 00:16:16.740 bw ( KiB/s): min=10232, max=10248, per=15.78%, avg=10240.00, stdev=11.31, samples=2 00:16:16.740 iops : min= 2558, max= 2562, avg=2560.00, stdev= 2.83, samples=2 00:16:16.740 lat (msec) : 10=0.30%, 20=46.32%, 50=46.22%, 100=7.15% 00:16:16.740 cpu : usr=3.46%, sys=3.86%, ctx=296, majf=0, minf=13 00:16:16.740 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:16.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.740 issued rwts: total=2377,2560,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:16:16.740 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.740 job1: (groupid=0, jobs=1): err= 0: pid=639815: Sat Jul 27 01:24:08 2024 00:16:16.740 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:16:16.740 slat (usec): min=3, max=13824, avg=125.64, stdev=862.19 00:16:16.740 clat (usec): min=5901, max=37315, avg=15397.42, stdev=4889.84 00:16:16.740 lat (usec): min=5914, max=37329, avg=15523.06, stdev=4932.52 00:16:16.740 clat percentiles (usec): 00:16:16.740 | 1.00th=[ 6390], 5.00th=[10290], 10.00th=[11600], 20.00th=[11863], 00:16:16.740 | 30.00th=[12256], 40.00th=[13304], 50.00th=[13960], 60.00th=[14615], 00:16:16.740 | 70.00th=[16319], 80.00th=[18220], 90.00th=[22938], 95.00th=[25822], 00:16:16.740 | 99.00th=[30540], 99.50th=[31589], 99.90th=[37487], 99.95th=[37487], 00:16:16.740 | 99.99th=[37487] 00:16:16.740 write: IOPS=3945, BW=15.4MiB/s (16.2MB/s)(15.6MiB/1011msec); 0 zone resets 00:16:16.740 slat (usec): min=3, max=11040, avg=130.40, stdev=689.72 00:16:16.740 clat (usec): min=1165, max=44475, avg=18332.46, stdev=8878.33 00:16:16.740 lat (usec): min=1179, max=44496, avg=18462.87, stdev=8934.89 00:16:16.740 clat percentiles (usec): 00:16:16.740 | 1.00th=[ 4293], 5.00th=[ 7177], 10.00th=[ 8455], 20.00th=[10552], 00:16:16.740 | 30.00th=[11994], 40.00th=[13435], 50.00th=[17433], 60.00th=[20579], 00:16:16.740 | 70.00th=[21365], 80.00th=[25297], 90.00th=[32900], 95.00th=[35914], 00:16:16.740 | 99.00th=[39584], 99.50th=[40633], 99.90th=[44303], 99.95th=[44303], 00:16:16.740 | 99.99th=[44303] 00:16:16.740 bw ( KiB/s): min=14520, max=16368, per=23.81%, avg=15444.00, stdev=1306.73, samples=2 00:16:16.740 iops : min= 3630, max= 4092, avg=3861.00, stdev=326.68, samples=2 00:16:16.740 lat (msec) : 2=0.07%, 4=0.37%, 10=10.19%, 20=58.19%, 50=31.18% 00:16:16.740 cpu : usr=3.56%, sys=6.44%, ctx=380, majf=0, minf=11 00:16:16.740 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:16.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.740 issued rwts: total=3584,3989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.740 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.740 job2: (groupid=0, jobs=1): err= 0: pid=639816: Sat Jul 27 01:24:08 2024 00:16:16.740 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:16:16.740 slat (usec): min=3, max=8009, avg=107.98, stdev=623.43 00:16:16.740 clat (usec): min=7728, max=39249, avg=14113.62, stdev=4319.19 00:16:16.740 lat (usec): min=7892, max=39255, avg=14221.60, stdev=4351.19 00:16:16.740 clat percentiles (usec): 00:16:16.740 | 1.00th=[ 8356], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11469], 00:16:16.740 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12780], 00:16:16.740 | 70.00th=[13829], 80.00th=[16712], 90.00th=[21365], 95.00th=[23987], 00:16:16.740 | 99.00th=[29230], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:16:16.740 | 99.99th=[39060] 00:16:16.740 write: IOPS=4725, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1004msec); 0 zone resets 00:16:16.740 slat (usec): min=4, max=12714, avg=96.98, stdev=572.81 00:16:16.740 clat (usec): min=3725, max=30415, avg=13023.18, stdev=2998.23 00:16:16.740 lat (usec): min=3763, max=35549, avg=13120.17, stdev=3031.01 00:16:16.741 clat percentiles (usec): 00:16:16.741 | 1.00th=[ 5014], 5.00th=[ 8979], 10.00th=[10290], 20.00th=[11600], 00:16:16.741 | 30.00th=[11994], 40.00th=[12125], 
50.00th=[12387], 60.00th=[12649], 00:16:16.741 | 70.00th=[13173], 80.00th=[14746], 90.00th=[16581], 95.00th=[18744], 00:16:16.741 | 99.00th=[25560], 99.50th=[25822], 99.90th=[27132], 99.95th=[27132], 00:16:16.741 | 99.99th=[30540] 00:16:16.741 bw ( KiB/s): min=16888, max=20120, per=28.52%, avg=18504.00, stdev=2285.37, samples=2 00:16:16.741 iops : min= 4222, max= 5030, avg=4626.00, stdev=571.34, samples=2 00:16:16.741 lat (msec) : 4=0.10%, 10=7.75%, 20=84.30%, 50=7.85% 00:16:16.741 cpu : usr=5.48%, sys=7.78%, ctx=517, majf=0, minf=13 00:16:16.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:16.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.741 issued rwts: total=4608,4744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.741 job3: (groupid=0, jobs=1): err= 0: pid=639817: Sat Jul 27 01:24:08 2024 00:16:16.741 read: IOPS=4678, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1008msec) 00:16:16.741 slat (usec): min=2, max=13254, avg=106.66, stdev=798.23 00:16:16.741 clat (usec): min=4777, max=57917, avg=14059.36, stdev=4602.04 00:16:16.741 lat (usec): min=4790, max=61118, avg=14166.02, stdev=4649.72 00:16:16.741 clat percentiles (usec): 00:16:16.741 | 1.00th=[ 8356], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10945], 00:16:16.741 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12780], 60.00th=[14091], 00:16:16.741 | 70.00th=[15139], 80.00th=[17171], 90.00th=[19530], 95.00th=[20579], 00:16:16.741 | 99.00th=[30540], 99.50th=[30802], 99.90th=[51643], 99.95th=[51643], 00:16:16.741 | 99.99th=[57934] 00:16:16.741 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:16:16.741 slat (usec): min=3, max=13951, avg=77.60, stdev=591.48 00:16:16.741 clat (usec): min=1422, max=28268, avg=12025.11, stdev=3746.16 00:16:16.741 lat (usec): min=1430, max=28273, avg=12102.71, stdev=3762.38 00:16:16.741 clat percentiles (usec): 00:16:16.741 | 1.00th=[ 3785], 5.00th=[ 6718], 10.00th=[ 7570], 20.00th=[ 8717], 00:16:16.741 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11994], 60.00th=[12780], 00:16:16.741 | 70.00th=[13698], 80.00th=[15139], 90.00th=[16909], 95.00th=[17695], 00:16:16.741 | 99.00th=[21890], 99.50th=[21890], 99.90th=[24249], 99.95th=[25035], 00:16:16.741 | 99.99th=[28181] 00:16:16.741 bw ( KiB/s): min=20320, max=20480, per=31.45%, avg=20400.00, stdev=113.14, samples=2 00:16:16.741 iops : min= 5080, max= 5120, avg=5100.00, stdev=28.28, samples=2 00:16:16.741 lat (msec) : 2=0.10%, 4=0.48%, 10=20.01%, 20=73.95%, 50=5.29% 00:16:16.741 lat (msec) : 100=0.17% 00:16:16.741 cpu : usr=3.57%, sys=7.75%, ctx=345, majf=0, minf=13 00:16:16.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:16.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.741 issued rwts: total=4716,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.741 00:16:16.741 Run status group 0 (all jobs): 00:16:16.741 READ: bw=59.0MiB/s (61.9MB/s), 9395KiB/s-18.3MiB/s (9621kB/s-19.2MB/s), io=59.7MiB (62.6MB), run=1004-1012msec 00:16:16.741 WRITE: bw=63.4MiB/s (66.4MB/s), 9.88MiB/s-19.8MiB/s (10.4MB/s-20.8MB/s), io=64.1MiB (67.2MB), run=1004-1012msec 00:16:16.741 00:16:16.741 Disk stats (read/write): 00:16:16.741 
nvme0n1: ios=2071/2431, merge=0/0, ticks=23495/29079, in_queue=52574, util=99.10% 00:16:16.741 nvme0n2: ios=2834/3072, merge=0/0, ticks=43185/61856, in_queue=105041, util=87.59% 00:16:16.741 nvme0n3: ios=3584/4096, merge=0/0, ticks=25895/25506, in_queue=51401, util=88.90% 00:16:16.741 nvme0n4: ios=4080/4111, merge=0/0, ticks=53153/46540, in_queue=99693, util=99.26% 00:16:16.741 01:24:08 -- target/fio.sh@55 -- # sync 00:16:16.741 01:24:08 -- target/fio.sh@59 -- # fio_pid=640315 00:16:16.741 01:24:08 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:16.741 01:24:08 -- target/fio.sh@61 -- # sleep 3 00:16:16.741 [global] 00:16:16.741 thread=1 00:16:16.741 invalidate=1 00:16:16.741 rw=read 00:16:16.741 time_based=1 00:16:16.741 runtime=10 00:16:16.741 ioengine=libaio 00:16:16.741 direct=1 00:16:16.741 bs=4096 00:16:16.741 iodepth=1 00:16:16.741 norandommap=1 00:16:16.741 numjobs=1 00:16:16.741 00:16:16.741 [job0] 00:16:16.741 filename=/dev/nvme0n1 00:16:16.741 [job1] 00:16:16.741 filename=/dev/nvme0n2 00:16:16.741 [job2] 00:16:16.741 filename=/dev/nvme0n3 00:16:16.741 [job3] 00:16:16.741 filename=/dev/nvme0n4 00:16:16.741 Could not set queue depth (nvme0n1) 00:16:16.741 Could not set queue depth (nvme0n2) 00:16:16.741 Could not set queue depth (nvme0n3) 00:16:16.741 Could not set queue depth (nvme0n4) 00:16:16.999 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.999 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.999 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.999 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.999 fio-3.35 00:16:16.999 Starting 4 threads 00:16:19.566 01:24:11 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:19.824 01:24:11 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:19.824 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=24776704, buflen=4096 00:16:19.824 fio: pid=640467, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:20.082 01:24:11 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.082 01:24:11 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:20.082 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=17444864, buflen=4096 00:16:20.082 fio: pid=640465, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:20.339 01:24:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.339 01:24:12 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:20.597 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=4423680, buflen=4096 00:16:20.597 fio: pid=640445, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:20.597 01:24:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.597 01:24:12 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc2 00:16:20.597 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=21745664, buflen=4096 00:16:20.597 fio: pid=640446, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:20.856 00:16:20.856 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=640445: Sat Jul 27 01:24:12 2024 00:16:20.856 read: IOPS=314, BW=1257KiB/s (1287kB/s)(4320KiB/3437msec) 00:16:20.856 slat (usec): min=5, max=33886, avg=51.44, stdev=1081.47 00:16:20.856 clat (usec): min=304, max=42513, avg=3107.45, stdev=10066.73 00:16:20.856 lat (usec): min=310, max=76027, avg=3148.90, stdev=10242.94 00:16:20.856 clat percentiles (usec): 00:16:20.856 | 1.00th=[ 322], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 408], 00:16:20.856 | 30.00th=[ 437], 40.00th=[ 449], 50.00th=[ 461], 60.00th=[ 474], 00:16:20.856 | 70.00th=[ 490], 80.00th=[ 515], 90.00th=[ 545], 95.00th=[41157], 00:16:20.856 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:16:20.856 | 99.99th=[42730] 00:16:20.856 bw ( KiB/s): min= 88, max= 5024, per=7.83%, avg=1426.67, stdev=2111.59, samples=6 00:16:20.856 iops : min= 22, max= 1256, avg=356.67, stdev=527.90, samples=6 00:16:20.856 lat (usec) : 500=73.08%, 750=20.26% 00:16:20.856 lat (msec) : 4=0.09%, 50=6.48% 00:16:20.856 cpu : usr=0.20%, sys=0.44%, ctx=1083, majf=0, minf=1 00:16:20.856 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.856 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.856 issued rwts: total=1081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.856 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.856 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=640446: Sat Jul 27 01:24:12 2024 00:16:20.856 read: IOPS=1448, BW=5791KiB/s (5930kB/s)(20.7MiB/3667msec) 00:16:20.856 slat (usec): min=4, max=30460, avg=24.65, stdev=491.94 00:16:20.856 clat (usec): min=298, max=47230, avg=658.36, stdev=3180.50 00:16:20.856 lat (usec): min=309, max=47242, avg=683.00, stdev=3218.61 00:16:20.856 clat percentiles (usec): 00:16:20.856 | 1.00th=[ 326], 5.00th=[ 343], 10.00th=[ 359], 20.00th=[ 379], 00:16:20.856 | 30.00th=[ 388], 40.00th=[ 396], 50.00th=[ 404], 60.00th=[ 416], 00:16:20.856 | 70.00th=[ 429], 80.00th=[ 441], 90.00th=[ 465], 95.00th=[ 502], 00:16:20.856 | 99.00th=[ 594], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:16:20.856 | 99.99th=[47449] 00:16:20.856 bw ( KiB/s): min= 96, max= 9792, per=32.25%, avg=5874.43, stdev=4129.23, samples=7 00:16:20.856 iops : min= 24, max= 2448, avg=1468.57, stdev=1032.29, samples=7 00:16:20.856 lat (usec) : 500=94.82%, 750=4.44%, 1000=0.08% 00:16:20.856 lat (msec) : 4=0.04%, 50=0.60% 00:16:20.856 cpu : usr=1.09%, sys=2.48%, ctx=5315, majf=0, minf=1 00:16:20.856 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.856 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.856 issued rwts: total=5310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.856 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.856 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=640465: Sat Jul 27 01:24:12 2024 00:16:20.856 read: IOPS=1354, BW=5415KiB/s 
(5545kB/s)(16.6MiB/3146msec) 00:16:20.856 slat (usec): min=5, max=16411, avg=20.24, stdev=314.31 00:16:20.856 clat (usec): min=303, max=42043, avg=708.87, stdev=3527.40 00:16:20.856 lat (usec): min=309, max=42060, avg=729.11, stdev=3541.60 00:16:20.856 clat percentiles (usec): 00:16:20.856 | 1.00th=[ 318], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 351], 00:16:20.856 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 388], 60.00th=[ 400], 00:16:20.856 | 70.00th=[ 420], 80.00th=[ 445], 90.00th=[ 490], 95.00th=[ 545], 00:16:20.856 | 99.00th=[ 676], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:16:20.856 | 99.99th=[42206] 00:16:20.856 bw ( KiB/s): min= 96, max= 9888, per=29.04%, avg=5289.33, stdev=4525.37, samples=6 00:16:20.856 iops : min= 24, max= 2472, avg=1322.33, stdev=1131.34, samples=6 00:16:20.856 lat (usec) : 500=90.99%, 750=8.12%, 1000=0.07% 00:16:20.856 lat (msec) : 2=0.02%, 4=0.02%, 50=0.75% 00:16:20.856 cpu : usr=1.02%, sys=2.70%, ctx=4264, majf=0, minf=1 00:16:20.856 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.856 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.856 issued rwts: total=4260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.856 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.856 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=640467: Sat Jul 27 01:24:12 2024 00:16:20.856 read: IOPS=2093, BW=8372KiB/s (8573kB/s)(23.6MiB/2890msec) 00:16:20.856 slat (nsec): min=5360, max=57360, avg=12380.02, stdev=6558.07 00:16:20.857 clat (usec): min=365, max=41401, avg=458.16, stdev=533.32 00:16:20.857 lat (usec): min=371, max=41407, avg=470.54, stdev=533.63 00:16:20.857 clat percentiles (usec): 00:16:20.857 | 1.00th=[ 379], 5.00th=[ 388], 10.00th=[ 392], 20.00th=[ 400], 00:16:20.857 | 30.00th=[ 408], 40.00th=[ 416], 50.00th=[ 424], 60.00th=[ 433], 00:16:20.857 | 70.00th=[ 453], 80.00th=[ 490], 90.00th=[ 537], 95.00th=[ 644], 00:16:20.857 | 99.00th=[ 725], 99.50th=[ 750], 99.90th=[ 1029], 99.95th=[ 1631], 00:16:20.857 | 99.99th=[41157] 00:16:20.857 bw ( KiB/s): min= 7136, max= 9448, per=46.49%, avg=8468.80, stdev=980.03, samples=5 00:16:20.857 iops : min= 1784, max= 2362, avg=2117.20, stdev=245.01, samples=5 00:16:20.857 lat (usec) : 500=83.09%, 750=16.41%, 1000=0.36% 00:16:20.857 lat (msec) : 2=0.08%, 4=0.02%, 50=0.02% 00:16:20.857 cpu : usr=1.80%, sys=4.02%, ctx=6050, majf=0, minf=1 00:16:20.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.857 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.857 issued rwts: total=6050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.857 00:16:20.857 Run status group 0 (all jobs): 00:16:20.857 READ: bw=17.8MiB/s (18.7MB/s), 1257KiB/s-8372KiB/s (1287kB/s-8573kB/s), io=65.2MiB (68.4MB), run=2890-3667msec 00:16:20.857 00:16:20.857 Disk stats (read/write): 00:16:20.857 nvme0n1: ios=1078/0, merge=0/0, ticks=3262/0, in_queue=3262, util=94.91% 00:16:20.857 nvme0n2: ios=5308/0, merge=0/0, ticks=3369/0, in_queue=3369, util=94.86% 00:16:20.857 nvme0n3: ios=4243/0, merge=0/0, ticks=4130/0, in_queue=4130, util=98.13% 00:16:20.857 nvme0n4: ios=6048/0, merge=0/0, ticks=2674/0, in_queue=2674, util=96.74% 00:16:20.857 
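The run above ends with every job reporting err=121 (Remote I/O error); that is the expected outcome of the hotplug case: the backing bdevs are deleted over the RPC socket while the 10-second read jobs are still in flight. A rough bash sketch of that sequence, assembled from the commands echoed in this trace (paths, wrapper flags and bdev names are taken from the log; waits and status handling are simplified):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # start a 10-second read workload against the connected nvme0n1..n4 namespaces
    $SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # pull the backing bdevs out from under the running jobs; every in-flight
    # read then completes with a Remote I/O error
    $SPDK/scripts/rpc.py bdev_raid_delete concat0
    $SPDK/scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $SPDK/scripts/rpc.py bdev_malloc_delete "$m"
    done

    # fio exits non-zero, which the harness reports as 'fio failed as expected'
    wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'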
01:24:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.857 01:24:12 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:21.114 01:24:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:21.114 01:24:12 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:21.372 01:24:13 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:21.372 01:24:13 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:21.628 01:24:13 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:21.628 01:24:13 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:21.885 01:24:13 -- target/fio.sh@69 -- # fio_status=0 00:16:21.885 01:24:13 -- target/fio.sh@70 -- # wait 640315 00:16:21.885 01:24:13 -- target/fio.sh@70 -- # fio_status=4 00:16:21.885 01:24:13 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.142 01:24:13 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.142 01:24:13 -- common/autotest_common.sh@1198 -- # local i=0 00:16:22.142 01:24:13 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:22.142 01:24:13 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.142 01:24:13 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:22.142 01:24:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.142 01:24:13 -- common/autotest_common.sh@1210 -- # return 0 00:16:22.142 01:24:13 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:22.142 01:24:13 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:22.142 nvmf hotplug test: fio failed as expected 00:16:22.142 01:24:13 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.401 01:24:14 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:22.401 01:24:14 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:22.401 01:24:14 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:22.401 01:24:14 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:22.401 01:24:14 -- target/fio.sh@91 -- # nvmftestfini 00:16:22.401 01:24:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:22.401 01:24:14 -- nvmf/common.sh@116 -- # sync 00:16:22.401 01:24:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:22.401 01:24:14 -- nvmf/common.sh@119 -- # set +e 00:16:22.401 01:24:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:22.401 01:24:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:22.401 rmmod nvme_tcp 00:16:22.401 rmmod nvme_fabrics 00:16:22.401 rmmod nvme_keyring 00:16:22.401 01:24:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:22.401 01:24:14 -- nvmf/common.sh@123 -- # set -e 00:16:22.401 01:24:14 -- nvmf/common.sh@124 -- # return 0 00:16:22.401 01:24:14 -- nvmf/common.sh@477 -- # '[' -n 637754 ']' 00:16:22.401 01:24:14 -- nvmf/common.sh@478 -- # killprocess 637754 00:16:22.401 01:24:14 -- 
common/autotest_common.sh@926 -- # '[' -z 637754 ']' 00:16:22.401 01:24:14 -- common/autotest_common.sh@930 -- # kill -0 637754 00:16:22.401 01:24:14 -- common/autotest_common.sh@931 -- # uname 00:16:22.401 01:24:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:22.401 01:24:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 637754 00:16:22.401 01:24:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:22.401 01:24:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:22.401 01:24:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 637754' 00:16:22.401 killing process with pid 637754 00:16:22.401 01:24:14 -- common/autotest_common.sh@945 -- # kill 637754 00:16:22.401 01:24:14 -- common/autotest_common.sh@950 -- # wait 637754 00:16:22.659 01:24:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:22.659 01:24:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:22.659 01:24:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:22.659 01:24:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.659 01:24:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:22.659 01:24:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.659 01:24:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.659 01:24:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.193 01:24:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:25.193 00:16:25.193 real 0m23.709s 00:16:25.193 user 1m22.750s 00:16:25.193 sys 0m6.807s 00:16:25.193 01:24:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:25.193 01:24:16 -- common/autotest_common.sh@10 -- # set +x 00:16:25.193 ************************************ 00:16:25.193 END TEST nvmf_fio_target 00:16:25.193 ************************************ 00:16:25.193 01:24:16 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:25.193 01:24:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:25.193 01:24:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:25.193 01:24:16 -- common/autotest_common.sh@10 -- # set +x 00:16:25.193 ************************************ 00:16:25.193 START TEST nvmf_bdevio 00:16:25.193 ************************************ 00:16:25.193 01:24:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:25.193 * Looking for test storage... 
00:16:25.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.193 01:24:16 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.193 01:24:16 -- nvmf/common.sh@7 -- # uname -s 00:16:25.193 01:24:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.193 01:24:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.193 01:24:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.193 01:24:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.193 01:24:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.193 01:24:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.193 01:24:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.193 01:24:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.193 01:24:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.193 01:24:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.193 01:24:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.193 01:24:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.193 01:24:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.193 01:24:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.193 01:24:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.193 01:24:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.193 01:24:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.193 01:24:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.193 01:24:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.193 01:24:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.193 01:24:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.193 01:24:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.193 01:24:16 -- paths/export.sh@5 -- # export PATH 00:16:25.193 01:24:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.193 01:24:16 -- nvmf/common.sh@46 -- # : 0 00:16:25.193 01:24:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:25.193 01:24:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:25.193 01:24:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:25.193 01:24:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.193 01:24:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.193 01:24:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:25.193 01:24:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:25.193 01:24:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:25.193 01:24:16 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:25.193 01:24:16 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:25.193 01:24:16 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:25.193 01:24:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:25.193 01:24:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.193 01:24:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:25.193 01:24:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:25.193 01:24:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:25.193 01:24:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.193 01:24:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.193 01:24:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.193 01:24:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:25.193 01:24:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:25.193 01:24:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:25.193 01:24:16 -- common/autotest_common.sh@10 -- # set +x 00:16:27.095 01:24:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:27.095 01:24:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:27.095 01:24:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:27.095 01:24:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:27.095 01:24:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:27.095 01:24:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:27.095 01:24:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:27.095 01:24:18 -- nvmf/common.sh@294 -- # net_devs=() 00:16:27.095 01:24:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:27.095 01:24:18 -- nvmf/common.sh@295 
-- # e810=() 00:16:27.095 01:24:18 -- nvmf/common.sh@295 -- # local -ga e810 00:16:27.095 01:24:18 -- nvmf/common.sh@296 -- # x722=() 00:16:27.095 01:24:18 -- nvmf/common.sh@296 -- # local -ga x722 00:16:27.095 01:24:18 -- nvmf/common.sh@297 -- # mlx=() 00:16:27.095 01:24:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:27.095 01:24:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:27.095 01:24:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:27.095 01:24:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:27.095 01:24:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:27.095 01:24:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:27.095 01:24:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:27.095 01:24:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:27.095 01:24:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:27.095 01:24:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:27.095 01:24:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:27.095 01:24:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:27.095 01:24:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:27.095 01:24:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:27.095 01:24:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:27.095 01:24:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:27.095 01:24:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:27.095 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:27.095 01:24:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:27.095 01:24:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:27.095 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:27.095 01:24:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:27.095 01:24:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:27.095 01:24:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:27.095 01:24:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.095 01:24:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:27.095 01:24:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.095 01:24:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:27.095 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:16:27.095 01:24:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.095 01:24:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:27.095 01:24:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.095 01:24:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:27.095 01:24:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.095 01:24:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:27.095 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:27.095 01:24:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.095 01:24:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:27.096 01:24:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:27.096 01:24:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:27.096 01:24:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:27.096 01:24:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:27.096 01:24:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.096 01:24:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.096 01:24:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:27.096 01:24:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:27.096 01:24:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:27.096 01:24:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:27.096 01:24:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:27.096 01:24:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:27.096 01:24:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.096 01:24:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:27.096 01:24:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:27.096 01:24:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:27.096 01:24:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:27.096 01:24:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:27.096 01:24:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:27.096 01:24:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:27.096 01:24:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:27.096 01:24:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:27.096 01:24:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:27.096 01:24:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:27.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:16:27.096 00:16:27.096 --- 10.0.0.2 ping statistics --- 00:16:27.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.096 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:16:27.096 01:24:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:27.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:16:27.096 00:16:27.096 --- 10.0.0.1 ping statistics --- 00:16:27.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.096 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:16:27.096 01:24:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.096 01:24:18 -- nvmf/common.sh@410 -- # return 0 00:16:27.096 01:24:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:27.096 01:24:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.096 01:24:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:27.096 01:24:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:27.096 01:24:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.096 01:24:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:27.096 01:24:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:27.096 01:24:18 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:27.096 01:24:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:27.096 01:24:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:27.096 01:24:18 -- common/autotest_common.sh@10 -- # set +x 00:16:27.096 01:24:18 -- nvmf/common.sh@469 -- # nvmfpid=643202 00:16:27.096 01:24:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:27.096 01:24:18 -- nvmf/common.sh@470 -- # waitforlisten 643202 00:16:27.096 01:24:18 -- common/autotest_common.sh@819 -- # '[' -z 643202 ']' 00:16:27.096 01:24:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.096 01:24:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:27.096 01:24:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.096 01:24:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:27.096 01:24:18 -- common/autotest_common.sh@10 -- # set +x 00:16:27.096 [2024-07-27 01:24:18.702213] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:27.096 [2024-07-27 01:24:18.702306] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.096 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.096 [2024-07-27 01:24:18.773665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.354 [2024-07-27 01:24:18.893256] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:27.354 [2024-07-27 01:24:18.893433] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.354 [2024-07-27 01:24:18.893453] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.354 [2024-07-27 01:24:18.893467] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
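The bring-up around this point condenses to a small number of commands: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 for the target side, the other port (cvl_0_1) stays in the default namespace as the 10.0.0.1 initiator side, connectivity is checked with the two pings above, and nvmf_tgt is then started inside the namespace and configured purely through rpc.py (the same RPC calls are echoed by rpc_cmd just below). A sketch of that sequence, lifted from the surrounding trace with waits and error handling omitted:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # split the two NIC ports across network namespaces
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

    # start the target in the namespace, then configure it over the RPC socket
    # (the harness waits for the socket before issuing these calls)
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420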
00:16:27.354 [2024-07-27 01:24:18.893562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:27.354 [2024-07-27 01:24:18.893637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:27.354 [2024-07-27 01:24:18.893695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:27.354 [2024-07-27 01:24:18.893698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.921 01:24:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:27.921 01:24:19 -- common/autotest_common.sh@852 -- # return 0 00:16:27.921 01:24:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:27.921 01:24:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:27.921 01:24:19 -- common/autotest_common.sh@10 -- # set +x 00:16:27.921 01:24:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.921 01:24:19 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:27.921 01:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.921 01:24:19 -- common/autotest_common.sh@10 -- # set +x 00:16:27.921 [2024-07-27 01:24:19.652461] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.921 01:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.921 01:24:19 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:27.921 01:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.921 01:24:19 -- common/autotest_common.sh@10 -- # set +x 00:16:28.179 Malloc0 00:16:28.179 01:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.179 01:24:19 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:28.179 01:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.179 01:24:19 -- common/autotest_common.sh@10 -- # set +x 00:16:28.179 01:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.179 01:24:19 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:28.179 01:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.179 01:24:19 -- common/autotest_common.sh@10 -- # set +x 00:16:28.179 01:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.179 01:24:19 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.179 01:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.179 01:24:19 -- common/autotest_common.sh@10 -- # set +x 00:16:28.179 [2024-07-27 01:24:19.706032] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.179 01:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.179 01:24:19 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:28.179 01:24:19 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:28.179 01:24:19 -- nvmf/common.sh@520 -- # config=() 00:16:28.179 01:24:19 -- nvmf/common.sh@520 -- # local subsystem config 00:16:28.179 01:24:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:28.179 01:24:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:28.179 { 00:16:28.179 "params": { 00:16:28.179 "name": "Nvme$subsystem", 00:16:28.179 "trtype": "$TEST_TRANSPORT", 00:16:28.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.179 "adrfam": "ipv4", 00:16:28.179 "trsvcid": 
"$NVMF_PORT", 00:16:28.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.179 "hdgst": ${hdgst:-false}, 00:16:28.179 "ddgst": ${ddgst:-false} 00:16:28.179 }, 00:16:28.179 "method": "bdev_nvme_attach_controller" 00:16:28.179 } 00:16:28.179 EOF 00:16:28.179 )") 00:16:28.179 01:24:19 -- nvmf/common.sh@542 -- # cat 00:16:28.179 01:24:19 -- nvmf/common.sh@544 -- # jq . 00:16:28.179 01:24:19 -- nvmf/common.sh@545 -- # IFS=, 00:16:28.179 01:24:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:28.179 "params": { 00:16:28.179 "name": "Nvme1", 00:16:28.179 "trtype": "tcp", 00:16:28.179 "traddr": "10.0.0.2", 00:16:28.179 "adrfam": "ipv4", 00:16:28.179 "trsvcid": "4420", 00:16:28.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.179 "hdgst": false, 00:16:28.179 "ddgst": false 00:16:28.179 }, 00:16:28.179 "method": "bdev_nvme_attach_controller" 00:16:28.179 }' 00:16:28.179 [2024-07-27 01:24:19.748783] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:28.180 [2024-07-27 01:24:19.748866] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid643363 ] 00:16:28.180 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.180 [2024-07-27 01:24:19.809699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:28.180 [2024-07-27 01:24:19.922882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.180 [2024-07-27 01:24:19.922930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.180 [2024-07-27 01:24:19.922933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.438 [2024-07-27 01:24:20.148729] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:28.438 [2024-07-27 01:24:20.148786] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:28.438 I/O targets: 00:16:28.438 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:28.438 00:16:28.438 00:16:28.438 CUnit - A unit testing framework for C - Version 2.1-3 00:16:28.438 http://cunit.sourceforge.net/ 00:16:28.438 00:16:28.438 00:16:28.438 Suite: bdevio tests on: Nvme1n1 00:16:28.695 Test: blockdev write read block ...passed 00:16:28.695 Test: blockdev write zeroes read block ...passed 00:16:28.695 Test: blockdev write zeroes read no split ...passed 00:16:28.695 Test: blockdev write zeroes read split ...passed 00:16:28.695 Test: blockdev write zeroes read split partial ...passed 00:16:28.695 Test: blockdev reset ...[2024-07-27 01:24:20.364984] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:28.695 [2024-07-27 01:24:20.365101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80b180 (9): Bad file descriptor 00:16:28.695 [2024-07-27 01:24:20.418397] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:28.695 passed 00:16:28.695 Test: blockdev write read 8 blocks ...passed 00:16:28.695 Test: blockdev write read size > 128k ...passed 00:16:28.695 Test: blockdev write read invalid size ...passed 00:16:28.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:28.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:28.952 Test: blockdev write read max offset ...passed 00:16:28.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:28.952 Test: blockdev writev readv 8 blocks ...passed 00:16:28.952 Test: blockdev writev readv 30 x 1block ...passed 00:16:28.952 Test: blockdev writev readv block ...passed 00:16:28.952 Test: blockdev writev readv size > 128k ...passed 00:16:28.952 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:28.952 Test: blockdev comparev and writev ...[2024-07-27 01:24:20.678889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.952 [2024-07-27 01:24:20.678927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:28.952 [2024-07-27 01:24:20.678951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.952 [2024-07-27 01:24:20.678969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.952 [2024-07-27 01:24:20.679354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.952 [2024-07-27 01:24:20.679380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:28.952 [2024-07-27 01:24:20.679402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.952 [2024-07-27 01:24:20.679418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:28.952 [2024-07-27 01:24:20.679809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.952 [2024-07-27 01:24:20.679833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:28.952 [2024-07-27 01:24:20.679855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.952 [2024-07-27 01:24:20.679880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:28.952 [2024-07-27 01:24:20.680261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.952 [2024-07-27 01:24:20.680285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:28.952 [2024-07-27 01:24:20.680306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.952 [2024-07-27 01:24:20.680322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:29.208 passed 00:16:29.208 Test: blockdev nvme passthru rw ...passed 00:16:29.208 Test: blockdev nvme passthru vendor specific ...[2024-07-27 01:24:20.764448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.208 [2024-07-27 01:24:20.764475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:29.208 [2024-07-27 01:24:20.764697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.208 [2024-07-27 01:24:20.764719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:29.208 [2024-07-27 01:24:20.764942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.208 [2024-07-27 01:24:20.764965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:29.208 [2024-07-27 01:24:20.765198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.208 [2024-07-27 01:24:20.765221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:29.208 passed 00:16:29.208 Test: blockdev nvme admin passthru ...passed 00:16:29.208 Test: blockdev copy ...passed 00:16:29.208 00:16:29.208 Run Summary: Type Total Ran Passed Failed Inactive 00:16:29.208 suites 1 1 n/a 0 0 00:16:29.208 tests 23 23 23 0 0 00:16:29.208 asserts 152 152 152 0 n/a 00:16:29.208 00:16:29.208 Elapsed time = 1.337 seconds 00:16:29.467 01:24:21 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.467 01:24:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:29.467 01:24:21 -- common/autotest_common.sh@10 -- # set +x 00:16:29.467 01:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:29.467 01:24:21 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:29.467 01:24:21 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:29.467 01:24:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:29.467 01:24:21 -- nvmf/common.sh@116 -- # sync 00:16:29.467 01:24:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:29.467 01:24:21 -- nvmf/common.sh@119 -- # set +e 00:16:29.467 01:24:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:29.467 01:24:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:29.467 rmmod nvme_tcp 00:16:29.467 rmmod nvme_fabrics 00:16:29.467 rmmod nvme_keyring 00:16:29.467 01:24:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:29.467 01:24:21 -- nvmf/common.sh@123 -- # set -e 00:16:29.467 01:24:21 -- nvmf/common.sh@124 -- # return 0 00:16:29.467 01:24:21 -- nvmf/common.sh@477 -- # '[' -n 643202 ']' 00:16:29.467 01:24:21 -- nvmf/common.sh@478 -- # killprocess 643202 00:16:29.467 01:24:21 -- common/autotest_common.sh@926 -- # '[' -z 643202 ']' 00:16:29.467 01:24:21 -- common/autotest_common.sh@930 -- # kill -0 643202 00:16:29.467 01:24:21 -- common/autotest_common.sh@931 -- # uname 00:16:29.467 01:24:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:29.467 01:24:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 643202 00:16:29.467 01:24:21 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:29.467 01:24:21 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:29.467 01:24:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 643202' 00:16:29.467 killing process with pid 643202 00:16:29.467 01:24:21 -- common/autotest_common.sh@945 -- # kill 643202 00:16:29.467 01:24:21 -- common/autotest_common.sh@950 -- # wait 643202 00:16:29.725 01:24:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:29.725 01:24:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:29.725 01:24:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:29.725 01:24:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.725 01:24:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:29.725 01:24:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.725 01:24:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.725 01:24:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.264 01:24:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:32.264 00:16:32.264 real 0m7.027s 00:16:32.264 user 0m13.221s 00:16:32.264 sys 0m2.098s 00:16:32.264 01:24:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:32.264 01:24:23 -- common/autotest_common.sh@10 -- # set +x 00:16:32.264 ************************************ 00:16:32.264 END TEST nvmf_bdevio 00:16:32.264 ************************************ 00:16:32.264 01:24:23 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:32.264 01:24:23 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:32.264 01:24:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:32.264 01:24:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:32.264 01:24:23 -- common/autotest_common.sh@10 -- # set +x 00:16:32.264 ************************************ 00:16:32.264 START TEST nvmf_bdevio_no_huge 00:16:32.264 ************************************ 00:16:32.264 01:24:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:32.264 * Looking for test storage... 
00:16:32.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:32.264 01:24:23 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.264 01:24:23 -- nvmf/common.sh@7 -- # uname -s 00:16:32.264 01:24:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.264 01:24:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.264 01:24:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.264 01:24:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.264 01:24:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.264 01:24:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.264 01:24:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.264 01:24:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.264 01:24:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.264 01:24:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.264 01:24:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.264 01:24:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.264 01:24:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.264 01:24:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.264 01:24:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.264 01:24:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.264 01:24:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.265 01:24:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.265 01:24:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.265 01:24:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.265 01:24:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.265 01:24:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.265 01:24:23 -- paths/export.sh@5 -- # export PATH 00:16:32.265 01:24:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.265 01:24:23 -- nvmf/common.sh@46 -- # : 0 00:16:32.265 01:24:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:32.265 01:24:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:32.265 01:24:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:32.265 01:24:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.265 01:24:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.265 01:24:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:32.265 01:24:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:32.265 01:24:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:32.265 01:24:23 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:32.265 01:24:23 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:32.265 01:24:23 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:32.265 01:24:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:32.265 01:24:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.265 01:24:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:32.265 01:24:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:32.265 01:24:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:32.265 01:24:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.265 01:24:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.265 01:24:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.265 01:24:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:32.265 01:24:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:32.265 01:24:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:32.265 01:24:23 -- common/autotest_common.sh@10 -- # set +x 00:16:34.168 01:24:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:34.168 01:24:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:34.168 01:24:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:34.168 01:24:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:34.168 01:24:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:34.168 01:24:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:34.168 01:24:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:34.168 01:24:25 -- nvmf/common.sh@294 -- # net_devs=() 00:16:34.168 01:24:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:34.168 01:24:25 -- nvmf/common.sh@295 
-- # e810=() 00:16:34.168 01:24:25 -- nvmf/common.sh@295 -- # local -ga e810 00:16:34.168 01:24:25 -- nvmf/common.sh@296 -- # x722=() 00:16:34.168 01:24:25 -- nvmf/common.sh@296 -- # local -ga x722 00:16:34.168 01:24:25 -- nvmf/common.sh@297 -- # mlx=() 00:16:34.168 01:24:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:34.168 01:24:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.168 01:24:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.168 01:24:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.168 01:24:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.168 01:24:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.168 01:24:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.168 01:24:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.168 01:24:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.168 01:24:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.168 01:24:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.168 01:24:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.168 01:24:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:34.168 01:24:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:34.168 01:24:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:34.168 01:24:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:34.168 01:24:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:34.168 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:34.168 01:24:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:34.168 01:24:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:34.168 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:34.168 01:24:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:34.168 01:24:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:34.168 01:24:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:34.168 01:24:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.168 01:24:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:34.168 01:24:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.168 01:24:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:34.168 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:16:34.168 01:24:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.169 01:24:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:34.169 01:24:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.169 01:24:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:34.169 01:24:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.169 01:24:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:34.169 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:34.169 01:24:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.169 01:24:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:34.169 01:24:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:34.169 01:24:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:34.169 01:24:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:34.169 01:24:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:34.169 01:24:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.169 01:24:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.169 01:24:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.169 01:24:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:34.169 01:24:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.169 01:24:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.169 01:24:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:34.169 01:24:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.169 01:24:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.169 01:24:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:34.169 01:24:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:34.169 01:24:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.169 01:24:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.169 01:24:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.169 01:24:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.169 01:24:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:34.169 01:24:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.169 01:24:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.169 01:24:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.169 01:24:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:34.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:16:34.169 00:16:34.169 --- 10.0.0.2 ping statistics --- 00:16:34.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.169 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:16:34.169 01:24:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:34.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:16:34.169 00:16:34.169 --- 10.0.0.1 ping statistics --- 00:16:34.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.169 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:16:34.169 01:24:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.169 01:24:25 -- nvmf/common.sh@410 -- # return 0 00:16:34.169 01:24:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:34.169 01:24:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.169 01:24:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:34.169 01:24:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:34.169 01:24:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.169 01:24:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:34.169 01:24:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:34.169 01:24:25 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:34.169 01:24:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:34.169 01:24:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:34.169 01:24:25 -- common/autotest_common.sh@10 -- # set +x 00:16:34.169 01:24:25 -- nvmf/common.sh@469 -- # nvmfpid=645449 00:16:34.169 01:24:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:34.169 01:24:25 -- nvmf/common.sh@470 -- # waitforlisten 645449 00:16:34.169 01:24:25 -- common/autotest_common.sh@819 -- # '[' -z 645449 ']' 00:16:34.169 01:24:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.169 01:24:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:34.169 01:24:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.169 01:24:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:34.169 01:24:25 -- common/autotest_common.sh@10 -- # set +x 00:16:34.169 [2024-07-27 01:24:25.778128] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:34.169 [2024-07-27 01:24:25.778200] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:34.169 [2024-07-27 01:24:25.852023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.428 [2024-07-27 01:24:25.970634] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:34.428 [2024-07-27 01:24:25.970818] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.428 [2024-07-27 01:24:25.970837] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.428 [2024-07-27 01:24:25.970852] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:34.428 [2024-07-27 01:24:25.970990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:34.428 [2024-07-27 01:24:25.971149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.428 [2024-07-27 01:24:25.971146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:34.428 [2024-07-27 01:24:25.971112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:34.997 01:24:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:34.997 01:24:26 -- common/autotest_common.sh@852 -- # return 0 00:16:34.997 01:24:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:34.997 01:24:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:34.997 01:24:26 -- common/autotest_common.sh@10 -- # set +x 00:16:35.275 01:24:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.275 01:24:26 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:35.275 01:24:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.275 01:24:26 -- common/autotest_common.sh@10 -- # set +x 00:16:35.275 [2024-07-27 01:24:26.778106] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.275 01:24:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.275 01:24:26 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:35.275 01:24:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.275 01:24:26 -- common/autotest_common.sh@10 -- # set +x 00:16:35.275 Malloc0 00:16:35.275 01:24:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.275 01:24:26 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:35.275 01:24:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.275 01:24:26 -- common/autotest_common.sh@10 -- # set +x 00:16:35.275 01:24:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.275 01:24:26 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:35.275 01:24:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.275 01:24:26 -- common/autotest_common.sh@10 -- # set +x 00:16:35.275 01:24:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.275 01:24:26 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.275 01:24:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.275 01:24:26 -- common/autotest_common.sh@10 -- # set +x 00:16:35.275 [2024-07-27 01:24:26.816117] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.275 01:24:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.275 01:24:26 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:35.275 01:24:26 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:35.275 01:24:26 -- nvmf/common.sh@520 -- # config=() 00:16:35.275 01:24:26 -- nvmf/common.sh@520 -- # local subsystem config 00:16:35.275 01:24:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:35.275 01:24:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:35.275 { 00:16:35.275 "params": { 00:16:35.275 "name": "Nvme$subsystem", 00:16:35.275 "trtype": "$TEST_TRANSPORT", 00:16:35.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:35.275 "adrfam": "ipv4", 00:16:35.275 
"trsvcid": "$NVMF_PORT", 00:16:35.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:35.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:35.276 "hdgst": ${hdgst:-false}, 00:16:35.276 "ddgst": ${ddgst:-false} 00:16:35.276 }, 00:16:35.276 "method": "bdev_nvme_attach_controller" 00:16:35.276 } 00:16:35.276 EOF 00:16:35.276 )") 00:16:35.276 01:24:26 -- nvmf/common.sh@542 -- # cat 00:16:35.276 01:24:26 -- nvmf/common.sh@544 -- # jq . 00:16:35.276 01:24:26 -- nvmf/common.sh@545 -- # IFS=, 00:16:35.276 01:24:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:35.276 "params": { 00:16:35.276 "name": "Nvme1", 00:16:35.276 "trtype": "tcp", 00:16:35.276 "traddr": "10.0.0.2", 00:16:35.276 "adrfam": "ipv4", 00:16:35.276 "trsvcid": "4420", 00:16:35.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:35.276 "hdgst": false, 00:16:35.276 "ddgst": false 00:16:35.276 }, 00:16:35.276 "method": "bdev_nvme_attach_controller" 00:16:35.276 }' 00:16:35.276 [2024-07-27 01:24:26.854686] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:35.276 [2024-07-27 01:24:26.854764] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid645617 ] 00:16:35.276 [2024-07-27 01:24:26.918020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:35.548 [2024-07-27 01:24:27.026977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.548 [2024-07-27 01:24:27.027019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.548 [2024-07-27 01:24:27.027022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.548 [2024-07-27 01:24:27.300306] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:35.548 [2024-07-27 01:24:27.300356] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:35.548 I/O targets: 00:16:35.548 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:35.548 00:16:35.548 00:16:35.548 CUnit - A unit testing framework for C - Version 2.1-3 00:16:35.548 http://cunit.sourceforge.net/ 00:16:35.548 00:16:35.548 00:16:35.548 Suite: bdevio tests on: Nvme1n1 00:16:35.808 Test: blockdev write read block ...passed 00:16:35.808 Test: blockdev write zeroes read block ...passed 00:16:35.808 Test: blockdev write zeroes read no split ...passed 00:16:35.808 Test: blockdev write zeroes read split ...passed 00:16:35.808 Test: blockdev write zeroes read split partial ...passed 00:16:35.808 Test: blockdev reset ...[2024-07-27 01:24:27.524565] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:35.808 [2024-07-27 01:24:27.524682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70eb00 (9): Bad file descriptor 00:16:35.808 [2024-07-27 01:24:27.540956] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:35.808 passed 00:16:36.067 Test: blockdev write read 8 blocks ...passed 00:16:36.067 Test: blockdev write read size > 128k ...passed 00:16:36.067 Test: blockdev write read invalid size ...passed 00:16:36.067 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:36.067 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:36.067 Test: blockdev write read max offset ...passed 00:16:36.067 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:36.067 Test: blockdev writev readv 8 blocks ...passed 00:16:36.067 Test: blockdev writev readv 30 x 1block ...passed 00:16:36.067 Test: blockdev writev readv block ...passed 00:16:36.067 Test: blockdev writev readv size > 128k ...passed 00:16:36.067 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:36.067 Test: blockdev comparev and writev ...[2024-07-27 01:24:27.795699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.067 [2024-07-27 01:24:27.795736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:36.067 [2024-07-27 01:24:27.795760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.067 [2024-07-27 01:24:27.795777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:36.067 [2024-07-27 01:24:27.796173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.067 [2024-07-27 01:24:27.796198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:36.067 [2024-07-27 01:24:27.796219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.067 [2024-07-27 01:24:27.796236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:36.067 [2024-07-27 01:24:27.796636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.067 [2024-07-27 01:24:27.796660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:36.067 [2024-07-27 01:24:27.796682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.067 [2024-07-27 01:24:27.796698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:36.067 [2024-07-27 01:24:27.797110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.067 [2024-07-27 01:24:27.797134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:36.067 [2024-07-27 01:24:27.797165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.067 [2024-07-27 01:24:27.797182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:36.327 passed 00:16:36.327 Test: blockdev nvme passthru rw ...passed 00:16:36.327 Test: blockdev nvme passthru vendor specific ...[2024-07-27 01:24:27.879431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:36.327 [2024-07-27 01:24:27.879458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:36.327 [2024-07-27 01:24:27.879653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:36.327 [2024-07-27 01:24:27.879675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:36.327 [2024-07-27 01:24:27.879871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:36.327 [2024-07-27 01:24:27.879892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:36.327 [2024-07-27 01:24:27.880089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:36.327 [2024-07-27 01:24:27.880112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:36.327 passed 00:16:36.327 Test: blockdev nvme admin passthru ...passed 00:16:36.327 Test: blockdev copy ...passed 00:16:36.327 00:16:36.327 Run Summary: Type Total Ran Passed Failed Inactive 00:16:36.327 suites 1 1 n/a 0 0 00:16:36.327 tests 23 23 23 0 0 00:16:36.327 asserts 152 152 152 0 n/a 00:16:36.327 00:16:36.327 Elapsed time = 1.271 seconds 00:16:36.586 01:24:28 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.586 01:24:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:36.586 01:24:28 -- common/autotest_common.sh@10 -- # set +x 00:16:36.586 01:24:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:36.586 01:24:28 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:36.586 01:24:28 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:36.586 01:24:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:36.586 01:24:28 -- nvmf/common.sh@116 -- # sync 00:16:36.586 01:24:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:36.586 01:24:28 -- nvmf/common.sh@119 -- # set +e 00:16:36.586 01:24:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:36.586 01:24:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:36.586 rmmod nvme_tcp 00:16:36.586 rmmod nvme_fabrics 00:16:36.844 rmmod nvme_keyring 00:16:36.844 01:24:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:36.844 01:24:28 -- nvmf/common.sh@123 -- # set -e 00:16:36.844 01:24:28 -- nvmf/common.sh@124 -- # return 0 00:16:36.844 01:24:28 -- nvmf/common.sh@477 -- # '[' -n 645449 ']' 00:16:36.844 01:24:28 -- nvmf/common.sh@478 -- # killprocess 645449 00:16:36.844 01:24:28 -- common/autotest_common.sh@926 -- # '[' -z 645449 ']' 00:16:36.844 01:24:28 -- common/autotest_common.sh@930 -- # kill -0 645449 00:16:36.844 01:24:28 -- common/autotest_common.sh@931 -- # uname 00:16:36.844 01:24:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:36.844 01:24:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 645449 00:16:36.844 01:24:28 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:36.844 01:24:28 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:36.844 01:24:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 645449' 00:16:36.844 killing process with pid 645449 00:16:36.844 01:24:28 -- common/autotest_common.sh@945 -- # kill 645449 00:16:36.844 01:24:28 -- common/autotest_common.sh@950 -- # wait 645449 00:16:37.103 01:24:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:37.103 01:24:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:37.103 01:24:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:37.103 01:24:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:37.103 01:24:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:37.103 01:24:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.103 01:24:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.103 01:24:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.639 01:24:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:39.639 00:16:39.639 real 0m7.351s 00:16:39.639 user 0m14.356s 00:16:39.639 sys 0m2.560s 00:16:39.639 01:24:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.639 01:24:30 -- common/autotest_common.sh@10 -- # set +x 00:16:39.639 ************************************ 00:16:39.639 END TEST nvmf_bdevio_no_huge 00:16:39.639 ************************************ 00:16:39.639 01:24:30 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:39.639 01:24:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:39.639 01:24:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:39.639 01:24:30 -- common/autotest_common.sh@10 -- # set +x 00:16:39.639 ************************************ 00:16:39.639 START TEST nvmf_tls 00:16:39.639 ************************************ 00:16:39.639 01:24:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:39.639 * Looking for test storage... 
00:16:39.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.639 01:24:30 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.639 01:24:30 -- nvmf/common.sh@7 -- # uname -s 00:16:39.639 01:24:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.639 01:24:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.639 01:24:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.639 01:24:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.639 01:24:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.639 01:24:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.639 01:24:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.639 01:24:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.639 01:24:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.639 01:24:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.639 01:24:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.639 01:24:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.639 01:24:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.639 01:24:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.639 01:24:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.639 01:24:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.639 01:24:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.639 01:24:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.639 01:24:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.639 01:24:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.639 01:24:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.639 01:24:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.639 01:24:30 -- paths/export.sh@5 -- # export PATH 00:16:39.639 01:24:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.639 01:24:30 -- nvmf/common.sh@46 -- # : 0 00:16:39.639 01:24:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:39.639 01:24:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:39.639 01:24:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:39.639 01:24:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.639 01:24:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.639 01:24:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:39.639 01:24:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:39.639 01:24:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:39.639 01:24:30 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:39.639 01:24:30 -- target/tls.sh@71 -- # nvmftestinit 00:16:39.639 01:24:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:39.639 01:24:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.639 01:24:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:39.639 01:24:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:39.639 01:24:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:39.639 01:24:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.639 01:24:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.639 01:24:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.639 01:24:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:39.639 01:24:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:39.639 01:24:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:39.639 01:24:30 -- common/autotest_common.sh@10 -- # set +x 00:16:41.542 01:24:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:41.542 01:24:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:41.542 01:24:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:41.542 01:24:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:41.542 01:24:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:41.542 01:24:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:41.542 01:24:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:41.542 01:24:32 -- nvmf/common.sh@294 -- # net_devs=() 00:16:41.542 01:24:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:41.542 01:24:32 -- nvmf/common.sh@295 -- # e810=() 00:16:41.542 
01:24:32 -- nvmf/common.sh@295 -- # local -ga e810 00:16:41.542 01:24:32 -- nvmf/common.sh@296 -- # x722=() 00:16:41.542 01:24:32 -- nvmf/common.sh@296 -- # local -ga x722 00:16:41.542 01:24:32 -- nvmf/common.sh@297 -- # mlx=() 00:16:41.542 01:24:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:41.542 01:24:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.542 01:24:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.542 01:24:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.542 01:24:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.542 01:24:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.542 01:24:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.542 01:24:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:41.542 01:24:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.542 01:24:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.542 01:24:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.542 01:24:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.542 01:24:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:41.542 01:24:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:41.542 01:24:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:41.542 01:24:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:41.542 01:24:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:41.542 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:41.542 01:24:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:41.542 01:24:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:41.542 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:41.542 01:24:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:41.542 01:24:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:41.542 01:24:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.542 01:24:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:41.542 01:24:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.542 01:24:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:41.542 Found net devices under 
0000:0a:00.0: cvl_0_0 00:16:41.542 01:24:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.542 01:24:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:41.542 01:24:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.542 01:24:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:41.542 01:24:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.542 01:24:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:41.542 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:41.542 01:24:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.542 01:24:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:41.542 01:24:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:41.542 01:24:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:41.542 01:24:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:41.542 01:24:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.542 01:24:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.542 01:24:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:41.542 01:24:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:41.542 01:24:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:41.542 01:24:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:41.542 01:24:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:41.542 01:24:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:41.542 01:24:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.542 01:24:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:41.542 01:24:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:41.542 01:24:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:41.542 01:24:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:41.542 01:24:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:41.542 01:24:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:41.542 01:24:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:41.542 01:24:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:41.542 01:24:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:41.542 01:24:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:41.542 01:24:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:41.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:16:41.543 00:16:41.543 --- 10.0.0.2 ping statistics --- 00:16:41.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.543 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:16:41.543 01:24:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:41.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:16:41.543 00:16:41.543 --- 10.0.0.1 ping statistics --- 00:16:41.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.543 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:16:41.543 01:24:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.543 01:24:33 -- nvmf/common.sh@410 -- # return 0 00:16:41.543 01:24:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:41.543 01:24:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.543 01:24:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:41.543 01:24:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:41.543 01:24:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.543 01:24:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:41.543 01:24:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:41.543 01:24:33 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:41.543 01:24:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:41.543 01:24:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:41.543 01:24:33 -- common/autotest_common.sh@10 -- # set +x 00:16:41.543 01:24:33 -- nvmf/common.sh@469 -- # nvmfpid=647824 00:16:41.543 01:24:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:41.543 01:24:33 -- nvmf/common.sh@470 -- # waitforlisten 647824 00:16:41.543 01:24:33 -- common/autotest_common.sh@819 -- # '[' -z 647824 ']' 00:16:41.543 01:24:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.543 01:24:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:41.543 01:24:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.543 01:24:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:41.543 01:24:33 -- common/autotest_common.sh@10 -- # set +x 00:16:41.543 [2024-07-27 01:24:33.141881] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:41.543 [2024-07-27 01:24:33.141953] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.543 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.543 [2024-07-27 01:24:33.204211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.800 [2024-07-27 01:24:33.306121] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:41.800 [2024-07-27 01:24:33.306274] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.800 [2024-07-27 01:24:33.306291] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.800 [2024-07-27 01:24:33.306303] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
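For orientation, the namespace plumbing traced above reduces to the following sketch. Interface names and 10.0.0.x addresses are the ones this run reports: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and answers at 10.0.0.2, while the initiator port cvl_0_1 stays in the root namespace at 10.0.0.1.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                  # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator

The two pings above are the connectivity check the log records before the nvmf target is started inside the namespace.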
00:16:41.800 [2024-07-27 01:24:33.306329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.800 01:24:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:41.800 01:24:33 -- common/autotest_common.sh@852 -- # return 0 00:16:41.800 01:24:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:41.800 01:24:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:41.800 01:24:33 -- common/autotest_common.sh@10 -- # set +x 00:16:41.801 01:24:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.801 01:24:33 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:41.801 01:24:33 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:42.059 true 00:16:42.059 01:24:33 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:42.059 01:24:33 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:42.318 01:24:33 -- target/tls.sh@82 -- # version=0 00:16:42.318 01:24:33 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:42.318 01:24:33 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:42.576 01:24:34 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:42.576 01:24:34 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:42.576 01:24:34 -- target/tls.sh@90 -- # version=13 00:16:42.576 01:24:34 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:42.576 01:24:34 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:42.834 01:24:34 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:42.834 01:24:34 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:43.091 01:24:34 -- target/tls.sh@98 -- # version=7 00:16:43.091 01:24:34 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:43.091 01:24:34 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:43.091 01:24:34 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:43.349 01:24:35 -- target/tls.sh@105 -- # ktls=false 00:16:43.349 01:24:35 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:43.349 01:24:35 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:43.609 01:24:35 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:43.609 01:24:35 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:43.867 01:24:35 -- target/tls.sh@113 -- # ktls=true 00:16:43.867 01:24:35 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:43.867 01:24:35 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:44.126 01:24:35 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:44.126 01:24:35 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:44.386 01:24:36 -- target/tls.sh@121 -- # ktls=false 00:16:44.386 01:24:36 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:44.386 01:24:36 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:16:44.386 01:24:36 -- target/tls.sh@49 -- # local key hash crc 00:16:44.386 01:24:36 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:44.386 01:24:36 -- target/tls.sh@51 -- # hash=01 00:16:44.386 01:24:36 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:44.386 01:24:36 -- target/tls.sh@52 -- # gzip -1 -c 00:16:44.386 01:24:36 -- target/tls.sh@52 -- # tail -c8 00:16:44.386 01:24:36 -- target/tls.sh@52 -- # head -c 4 00:16:44.386 01:24:36 -- target/tls.sh@52 -- # crc='p$H�' 00:16:44.386 01:24:36 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:44.386 01:24:36 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:44.386 01:24:36 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:44.386 01:24:36 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:44.386 01:24:36 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:44.386 01:24:36 -- target/tls.sh@49 -- # local key hash crc 00:16:44.386 01:24:36 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:44.386 01:24:36 -- target/tls.sh@51 -- # hash=01 00:16:44.386 01:24:36 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:44.386 01:24:36 -- target/tls.sh@52 -- # gzip -1 -c 00:16:44.386 01:24:36 -- target/tls.sh@52 -- # tail -c8 00:16:44.386 01:24:36 -- target/tls.sh@52 -- # head -c 4 00:16:44.386 01:24:36 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:44.386 01:24:36 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:44.386 01:24:36 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:44.386 01:24:36 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:44.386 01:24:36 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:44.386 01:24:36 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:44.386 01:24:36 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:44.386 01:24:36 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:44.386 01:24:36 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:44.386 01:24:36 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:44.386 01:24:36 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:44.386 01:24:36 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:44.644 01:24:36 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:44.903 01:24:36 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:44.903 01:24:36 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:44.903 01:24:36 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:45.161 [2024-07-27 01:24:36.845637] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
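The format_interchange_psk helper traced above derives the PSK interchange string by appending a CRC-32 of the configured secret to the secret itself and base64-encoding the pair; the CRC-32 is lifted from the gzip trailer (the first four of its last eight bytes). A minimal shell sketch of the same derivation, using the demo key from this run; the unprintable characters shown in the crc variable above are simply the raw CRC bytes passing through the shell:

    key=00112233445566778899aabbccddeeff
    hash=01                                                    # hash identifier used by this test
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)  # gzip trailer: CRC-32, then input size
    psk=$(echo -n "$key$crc" | base64)
    echo "NVMeTLSkey-1:$hash:$psk:"
    # prints NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: as seen above

The same steps with the reversed demo key produce the key2.txt value; both files are written out and chmod 0600 before they are handed to the target.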
00:16:45.161 01:24:36 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:45.419 01:24:37 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:45.677 [2024-07-27 01:24:37.318910] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:45.677 [2024-07-27 01:24:37.319198] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.677 01:24:37 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:45.935 malloc0 00:16:45.935 01:24:37 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:46.194 01:24:37 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:46.453 01:24:38 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:46.453 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.434 Initializing NVMe Controllers 00:16:56.434 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:56.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:56.434 Initialization complete. Launching workers. 
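Stripped of the tracing, the target-side TLS setup performed above is a short RPC sequence: pin the ssl socket implementation to TLS 1.3, finish framework init, create the TCP transport, create a subsystem backed by a malloc bdev, add a listener with -k (TLS enabled), and register the allowed host together with its PSK file. Paths are the ones this job uses:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt   # created and chmod 0600 above
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"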
00:16:56.434 ======================================================== 00:16:56.435 Latency(us) 00:16:56.435 Device Information : IOPS MiB/s Average min max 00:16:56.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7719.48 30.15 8293.31 1206.98 9464.34 00:16:56.435 ======================================================== 00:16:56.435 Total : 7719.48 30.15 8293.31 1206.98 9464.34 00:16:56.435 00:16:56.435 01:24:48 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:56.435 01:24:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:56.435 01:24:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:56.435 01:24:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:56.435 01:24:48 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:16:56.435 01:24:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:56.435 01:24:48 -- target/tls.sh@28 -- # bdevperf_pid=649658 00:16:56.435 01:24:48 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:56.435 01:24:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:56.435 01:24:48 -- target/tls.sh@31 -- # waitforlisten 649658 /var/tmp/bdevperf.sock 00:16:56.435 01:24:48 -- common/autotest_common.sh@819 -- # '[' -z 649658 ']' 00:16:56.435 01:24:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.435 01:24:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:56.435 01:24:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.435 01:24:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:56.435 01:24:48 -- common/autotest_common.sh@10 -- # set +x 00:16:56.694 [2024-07-27 01:24:48.215287] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:56.694 [2024-07-27 01:24:48.215374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid649658 ] 00:16:56.694 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.694 [2024-07-27 01:24:48.275638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.694 [2024-07-27 01:24:48.386232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.632 01:24:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:57.632 01:24:49 -- common/autotest_common.sh@852 -- # return 0 00:16:57.632 01:24:49 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:57.924 [2024-07-27 01:24:49.454671] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:57.924 TLSTESTn1 00:16:57.924 01:24:49 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:57.924 Running I/O for 10 seconds... 00:17:10.131 00:17:10.131 Latency(us) 00:17:10.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.132 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:10.132 Verification LBA range: start 0x0 length 0x2000 00:17:10.132 TLSTESTn1 : 10.03 2202.28 8.60 0.00 0.00 58039.91 9951.76 65633.09 00:17:10.132 =================================================================================================================== 00:17:10.132 Total : 2202.28 8.60 0.00 0.00 58039.91 9951.76 65633.09 00:17:10.132 0 00:17:10.132 01:24:59 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:10.132 01:24:59 -- target/tls.sh@45 -- # killprocess 649658 00:17:10.132 01:24:59 -- common/autotest_common.sh@926 -- # '[' -z 649658 ']' 00:17:10.132 01:24:59 -- common/autotest_common.sh@930 -- # kill -0 649658 00:17:10.132 01:24:59 -- common/autotest_common.sh@931 -- # uname 00:17:10.132 01:24:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:10.132 01:24:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 649658 00:17:10.132 01:24:59 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:10.132 01:24:59 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:10.132 01:24:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 649658' 00:17:10.132 killing process with pid 649658 00:17:10.132 01:24:59 -- common/autotest_common.sh@945 -- # kill 649658 00:17:10.132 Received shutdown signal, test time was about 10.000000 seconds 00:17:10.132 00:17:10.132 Latency(us) 00:17:10.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.132 =================================================================================================================== 00:17:10.132 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.132 01:24:59 -- common/autotest_common.sh@950 -- # wait 649658 00:17:10.132 01:25:00 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:10.132 01:25:00 -- common/autotest_common.sh@640 -- # local es=0 00:17:10.132 01:25:00 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:10.132 01:25:00 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:10.132 01:25:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:10.132 01:25:00 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:10.132 01:25:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:10.132 01:25:00 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:10.132 01:25:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:10.132 01:25:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:10.132 01:25:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:10.132 01:25:00 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:17:10.132 01:25:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:10.132 01:25:00 -- target/tls.sh@28 -- # bdevperf_pid=651029 00:17:10.132 01:25:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.132 01:25:00 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:10.132 01:25:00 -- target/tls.sh@31 -- # waitforlisten 651029 /var/tmp/bdevperf.sock 00:17:10.132 01:25:00 -- common/autotest_common.sh@819 -- # '[' -z 651029 ']' 00:17:10.132 01:25:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.132 01:25:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:10.132 01:25:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.132 01:25:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:10.132 01:25:00 -- common/autotest_common.sh@10 -- # set +x 00:17:10.132 [2024-07-27 01:25:00.069428] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:17:10.132 [2024-07-27 01:25:00.069512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651029 ] 00:17:10.132 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.132 [2024-07-27 01:25:00.136179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.132 [2024-07-27 01:25:00.248271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.132 01:25:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:10.132 01:25:01 -- common/autotest_common.sh@852 -- # return 0 00:17:10.132 01:25:01 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:10.132 [2024-07-27 01:25:01.248433] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:10.132 [2024-07-27 01:25:01.254543] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:10.132 [2024-07-27 01:25:01.255600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1833870 (107): Transport endpoint is not connected 00:17:10.132 [2024-07-27 01:25:01.256591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1833870 (9): Bad file descriptor 00:17:10.132 [2024-07-27 01:25:01.257591] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:10.132 [2024-07-27 01:25:01.257611] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:10.132 [2024-07-27 01:25:01.257638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:10.132 request: 00:17:10.132 { 00:17:10.132 "name": "TLSTEST", 00:17:10.132 "trtype": "tcp", 00:17:10.132 "traddr": "10.0.0.2", 00:17:10.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.132 "adrfam": "ipv4", 00:17:10.132 "trsvcid": "4420", 00:17:10.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.132 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:17:10.132 "method": "bdev_nvme_attach_controller", 00:17:10.132 "req_id": 1 00:17:10.132 } 00:17:10.132 Got JSON-RPC error response 00:17:10.132 response: 00:17:10.132 { 00:17:10.132 "code": -32602, 00:17:10.132 "message": "Invalid parameters" 00:17:10.132 } 00:17:10.132 01:25:01 -- target/tls.sh@36 -- # killprocess 651029 00:17:10.132 01:25:01 -- common/autotest_common.sh@926 -- # '[' -z 651029 ']' 00:17:10.132 01:25:01 -- common/autotest_common.sh@930 -- # kill -0 651029 00:17:10.132 01:25:01 -- common/autotest_common.sh@931 -- # uname 00:17:10.132 01:25:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:10.132 01:25:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 651029 00:17:10.132 01:25:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:10.132 01:25:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:10.132 01:25:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 651029' 00:17:10.132 killing process with pid 651029 00:17:10.132 01:25:01 -- common/autotest_common.sh@945 -- # kill 651029 00:17:10.132 Received shutdown signal, test time was about 10.000000 seconds 00:17:10.132 00:17:10.132 Latency(us) 00:17:10.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.132 =================================================================================================================== 00:17:10.132 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:10.132 01:25:01 -- common/autotest_common.sh@950 -- # wait 651029 00:17:10.132 01:25:01 -- target/tls.sh@37 -- # return 1 00:17:10.132 01:25:01 -- common/autotest_common.sh@643 -- # es=1 00:17:10.132 01:25:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:10.132 01:25:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:10.132 01:25:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:10.132 01:25:01 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:10.132 01:25:01 -- common/autotest_common.sh@640 -- # local es=0 00:17:10.132 01:25:01 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:10.132 01:25:01 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:10.132 01:25:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:10.132 01:25:01 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:10.132 01:25:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:10.132 01:25:01 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:10.132 01:25:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:10.132 01:25:01 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:10.132 01:25:01 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:17:10.132 01:25:01 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:17:10.132 01:25:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:10.132 01:25:01 -- target/tls.sh@28 -- # bdevperf_pid=651299 00:17:10.132 01:25:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.132 01:25:01 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:10.132 01:25:01 -- target/tls.sh@31 -- # waitforlisten 651299 /var/tmp/bdevperf.sock 00:17:10.132 01:25:01 -- common/autotest_common.sh@819 -- # '[' -z 651299 ']' 00:17:10.132 01:25:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.132 01:25:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:10.132 01:25:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.133 01:25:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:10.133 01:25:01 -- common/autotest_common.sh@10 -- # set +x 00:17:10.133 [2024-07-27 01:25:01.604368] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:10.133 [2024-07-27 01:25:01.604451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651299 ] 00:17:10.133 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.133 [2024-07-27 01:25:01.660918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.133 [2024-07-27 01:25:01.761194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.071 01:25:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:11.071 01:25:02 -- common/autotest_common.sh@852 -- # return 0 00:17:11.071 01:25:02 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:11.071 [2024-07-27 01:25:02.750942] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:11.071 [2024-07-27 01:25:02.758177] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:11.071 [2024-07-27 01:25:02.758208] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:11.071 [2024-07-27 01:25:02.758247] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:11.071 [2024-07-27 01:25:02.758859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2499870 (107): Transport endpoint is not connected 00:17:11.071 [2024-07-27 01:25:02.759848] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x2499870 (9): Bad file descriptor 00:17:11.071 [2024-07-27 01:25:02.760847] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:11.071 [2024-07-27 01:25:02.760867] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:11.071 [2024-07-27 01:25:02.760881] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:11.071 request: 00:17:11.071 { 00:17:11.071 "name": "TLSTEST", 00:17:11.071 "trtype": "tcp", 00:17:11.071 "traddr": "10.0.0.2", 00:17:11.071 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:11.071 "adrfam": "ipv4", 00:17:11.071 "trsvcid": "4420", 00:17:11.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.071 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:17:11.071 "method": "bdev_nvme_attach_controller", 00:17:11.071 "req_id": 1 00:17:11.071 } 00:17:11.071 Got JSON-RPC error response 00:17:11.071 response: 00:17:11.071 { 00:17:11.071 "code": -32602, 00:17:11.071 "message": "Invalid parameters" 00:17:11.071 } 00:17:11.071 01:25:02 -- target/tls.sh@36 -- # killprocess 651299 00:17:11.071 01:25:02 -- common/autotest_common.sh@926 -- # '[' -z 651299 ']' 00:17:11.071 01:25:02 -- common/autotest_common.sh@930 -- # kill -0 651299 00:17:11.071 01:25:02 -- common/autotest_common.sh@931 -- # uname 00:17:11.071 01:25:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:11.071 01:25:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 651299 00:17:11.071 01:25:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:11.071 01:25:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:11.071 01:25:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 651299' 00:17:11.071 killing process with pid 651299 00:17:11.071 01:25:02 -- common/autotest_common.sh@945 -- # kill 651299 00:17:11.071 Received shutdown signal, test time was about 10.000000 seconds 00:17:11.071 00:17:11.071 Latency(us) 00:17:11.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.071 =================================================================================================================== 00:17:11.071 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:11.071 01:25:02 -- common/autotest_common.sh@950 -- # wait 651299 00:17:11.329 01:25:03 -- target/tls.sh@37 -- # return 1 00:17:11.329 01:25:03 -- common/autotest_common.sh@643 -- # es=1 00:17:11.329 01:25:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:11.329 01:25:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:11.329 01:25:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:11.329 01:25:03 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:11.329 01:25:03 -- common/autotest_common.sh@640 -- # local es=0 00:17:11.329 01:25:03 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:11.329 01:25:03 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:11.329 01:25:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:11.330 01:25:03 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:11.330 01:25:03 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:11.330 01:25:03 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:11.330 01:25:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:11.330 01:25:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:11.330 01:25:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:11.330 01:25:03 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:17:11.330 01:25:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:11.330 01:25:03 -- target/tls.sh@28 -- # bdevperf_pid=651447 00:17:11.330 01:25:03 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:11.330 01:25:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:11.330 01:25:03 -- target/tls.sh@31 -- # waitforlisten 651447 /var/tmp/bdevperf.sock 00:17:11.330 01:25:03 -- common/autotest_common.sh@819 -- # '[' -z 651447 ']' 00:17:11.330 01:25:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.330 01:25:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:11.330 01:25:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.330 01:25:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:11.330 01:25:03 -- common/autotest_common.sh@10 -- # set +x 00:17:11.590 [2024-07-27 01:25:03.112040] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:17:11.590 [2024-07-27 01:25:03.112141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651447 ] 00:17:11.590 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.590 [2024-07-27 01:25:03.171069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.590 [2024-07-27 01:25:03.276305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.525 01:25:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:12.525 01:25:04 -- common/autotest_common.sh@852 -- # return 0 00:17:12.525 01:25:04 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:12.525 [2024-07-27 01:25:04.250659] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.525 [2024-07-27 01:25:04.262087] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:12.525 [2024-07-27 01:25:04.262119] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:12.525 [2024-07-27 01:25:04.262158] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:12.525 [2024-07-27 01:25:04.262805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb48870 (107): Transport endpoint is not connected 00:17:12.525 [2024-07-27 01:25:04.263796] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb48870 (9): Bad file descriptor 00:17:12.525 [2024-07-27 01:25:04.264795] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:12.525 [2024-07-27 01:25:04.264813] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:12.525 [2024-07-27 01:25:04.264836] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:12.525 request: 00:17:12.525 { 00:17:12.525 "name": "TLSTEST", 00:17:12.525 "trtype": "tcp", 00:17:12.525 "traddr": "10.0.0.2", 00:17:12.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:12.525 "adrfam": "ipv4", 00:17:12.525 "trsvcid": "4420", 00:17:12.525 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:12.525 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:17:12.525 "method": "bdev_nvme_attach_controller", 00:17:12.525 "req_id": 1 00:17:12.525 } 00:17:12.525 Got JSON-RPC error response 00:17:12.525 response: 00:17:12.525 { 00:17:12.525 "code": -32602, 00:17:12.525 "message": "Invalid parameters" 00:17:12.525 } 00:17:12.525 01:25:04 -- target/tls.sh@36 -- # killprocess 651447 00:17:12.525 01:25:04 -- common/autotest_common.sh@926 -- # '[' -z 651447 ']' 00:17:12.784 01:25:04 -- common/autotest_common.sh@930 -- # kill -0 651447 00:17:12.784 01:25:04 -- common/autotest_common.sh@931 -- # uname 00:17:12.784 01:25:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:12.784 01:25:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 651447 00:17:12.784 01:25:04 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:12.784 01:25:04 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:12.784 01:25:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 651447' 00:17:12.784 killing process with pid 651447 00:17:12.784 01:25:04 -- common/autotest_common.sh@945 -- # kill 651447 00:17:12.784 Received shutdown signal, test time was about 10.000000 seconds 00:17:12.784 00:17:12.784 Latency(us) 00:17:12.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.784 =================================================================================================================== 00:17:12.784 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:12.784 01:25:04 -- common/autotest_common.sh@950 -- # wait 651447 00:17:13.042 01:25:04 -- target/tls.sh@37 -- # return 1 00:17:13.042 01:25:04 -- common/autotest_common.sh@643 -- # es=1 00:17:13.042 01:25:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:13.042 01:25:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:13.042 01:25:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:13.042 01:25:04 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:13.042 01:25:04 -- common/autotest_common.sh@640 -- # local es=0 00:17:13.042 01:25:04 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:13.042 01:25:04 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:13.042 01:25:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:13.042 01:25:04 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:13.042 01:25:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:13.042 01:25:04 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:13.042 01:25:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:13.042 01:25:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:13.042 01:25:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:13.042 01:25:04 -- target/tls.sh@23 -- # psk= 00:17:13.042 01:25:04 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.042 01:25:04 -- target/tls.sh@28 -- # 
bdevperf_pid=651626 00:17:13.042 01:25:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.042 01:25:04 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:13.042 01:25:04 -- target/tls.sh@31 -- # waitforlisten 651626 /var/tmp/bdevperf.sock 00:17:13.042 01:25:04 -- common/autotest_common.sh@819 -- # '[' -z 651626 ']' 00:17:13.042 01:25:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.042 01:25:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:13.042 01:25:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.042 01:25:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:13.042 01:25:04 -- common/autotest_common.sh@10 -- # set +x 00:17:13.042 [2024-07-27 01:25:04.603237] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:13.042 [2024-07-27 01:25:04.603328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651626 ] 00:17:13.042 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.042 [2024-07-27 01:25:04.663413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.042 [2024-07-27 01:25:04.766087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.978 01:25:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:13.978 01:25:05 -- common/autotest_common.sh@852 -- # return 0 00:17:13.978 01:25:05 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:14.237 [2024-07-27 01:25:05.766116] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:14.237 [2024-07-27 01:25:05.768003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d6330 (9): Bad file descriptor 00:17:14.237 [2024-07-27 01:25:05.768999] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:14.237 [2024-07-27 01:25:05.769024] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:14.237 [2024-07-27 01:25:05.769053] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:14.237 request: 00:17:14.237 { 00:17:14.237 "name": "TLSTEST", 00:17:14.237 "trtype": "tcp", 00:17:14.237 "traddr": "10.0.0.2", 00:17:14.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:14.237 "adrfam": "ipv4", 00:17:14.237 "trsvcid": "4420", 00:17:14.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.237 "method": "bdev_nvme_attach_controller", 00:17:14.237 "req_id": 1 00:17:14.237 } 00:17:14.237 Got JSON-RPC error response 00:17:14.237 response: 00:17:14.237 { 00:17:14.237 "code": -32602, 00:17:14.237 "message": "Invalid parameters" 00:17:14.237 } 00:17:14.237 01:25:05 -- target/tls.sh@36 -- # killprocess 651626 00:17:14.238 01:25:05 -- common/autotest_common.sh@926 -- # '[' -z 651626 ']' 00:17:14.238 01:25:05 -- common/autotest_common.sh@930 -- # kill -0 651626 00:17:14.238 01:25:05 -- common/autotest_common.sh@931 -- # uname 00:17:14.238 01:25:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:14.238 01:25:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 651626 00:17:14.238 01:25:05 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:14.238 01:25:05 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:14.238 01:25:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 651626' 00:17:14.238 killing process with pid 651626 00:17:14.238 01:25:05 -- common/autotest_common.sh@945 -- # kill 651626 00:17:14.238 Received shutdown signal, test time was about 10.000000 seconds 00:17:14.238 00:17:14.238 Latency(us) 00:17:14.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.238 =================================================================================================================== 00:17:14.238 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:14.238 01:25:05 -- common/autotest_common.sh@950 -- # wait 651626 00:17:14.498 01:25:06 -- target/tls.sh@37 -- # return 1 00:17:14.498 01:25:06 -- common/autotest_common.sh@643 -- # es=1 00:17:14.498 01:25:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:14.498 01:25:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:14.498 01:25:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:14.498 01:25:06 -- target/tls.sh@167 -- # killprocess 647824 00:17:14.498 01:25:06 -- common/autotest_common.sh@926 -- # '[' -z 647824 ']' 00:17:14.498 01:25:06 -- common/autotest_common.sh@930 -- # kill -0 647824 00:17:14.498 01:25:06 -- common/autotest_common.sh@931 -- # uname 00:17:14.498 01:25:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:14.498 01:25:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 647824 00:17:14.498 01:25:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:14.498 01:25:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:14.498 01:25:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 647824' 00:17:14.498 killing process with pid 647824 00:17:14.498 01:25:06 -- common/autotest_common.sh@945 -- # kill 647824 00:17:14.498 01:25:06 -- common/autotest_common.sh@950 -- # wait 647824 00:17:14.757 01:25:06 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:17:14.757 01:25:06 -- target/tls.sh@49 -- # local key hash crc 00:17:14.757 01:25:06 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:14.757 01:25:06 -- target/tls.sh@51 -- # hash=02 00:17:14.757 01:25:06 -- target/tls.sh@52 -- # echo -n 
00112233445566778899aabbccddeeff0011223344556677 00:17:14.757 01:25:06 -- target/tls.sh@52 -- # gzip -1 -c 00:17:14.757 01:25:06 -- target/tls.sh@52 -- # tail -c8 00:17:14.757 01:25:06 -- target/tls.sh@52 -- # head -c 4 00:17:14.757 01:25:06 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:14.757 01:25:06 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:14.757 01:25:06 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:14.757 01:25:06 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:14.757 01:25:06 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:14.757 01:25:06 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:14.757 01:25:06 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:14.757 01:25:06 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:14.757 01:25:06 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:14.757 01:25:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:14.757 01:25:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:14.757 01:25:06 -- common/autotest_common.sh@10 -- # set +x 00:17:14.757 01:25:06 -- nvmf/common.sh@469 -- # nvmfpid=651893 00:17:14.757 01:25:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:14.757 01:25:06 -- nvmf/common.sh@470 -- # waitforlisten 651893 00:17:14.757 01:25:06 -- common/autotest_common.sh@819 -- # '[' -z 651893 ']' 00:17:14.757 01:25:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.757 01:25:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:14.757 01:25:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.757 01:25:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:14.757 01:25:06 -- common/autotest_common.sh@10 -- # set +x 00:17:14.757 [2024-07-27 01:25:06.466015] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:14.757 [2024-07-27 01:25:06.466118] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.757 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.015 [2024-07-27 01:25:06.531224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.015 [2024-07-27 01:25:06.637701] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:15.015 [2024-07-27 01:25:06.637847] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.015 [2024-07-27 01:25:06.637862] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.015 [2024-07-27 01:25:06.637874] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:15.015 [2024-07-27 01:25:06.637901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.951 01:25:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:15.951 01:25:07 -- common/autotest_common.sh@852 -- # return 0 00:17:15.951 01:25:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:15.951 01:25:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:15.951 01:25:07 -- common/autotest_common.sh@10 -- # set +x 00:17:15.951 01:25:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.951 01:25:07 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:15.951 01:25:07 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:15.951 01:25:07 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:15.951 [2024-07-27 01:25:07.693114] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.209 01:25:07 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:16.209 01:25:07 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:16.466 [2024-07-27 01:25:08.170386] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:16.466 [2024-07-27 01:25:08.170631] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.466 01:25:08 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:16.723 malloc0 00:17:16.723 01:25:08 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:16.980 01:25:08 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:17.239 01:25:08 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:17.239 01:25:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:17.239 01:25:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:17.239 01:25:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:17.239 01:25:08 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:17:17.239 01:25:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:17.239 01:25:08 -- target/tls.sh@28 -- # bdevperf_pid=652197 00:17:17.239 01:25:08 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:17.239 01:25:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:17.239 01:25:08 -- target/tls.sh@31 -- # waitforlisten 652197 /var/tmp/bdevperf.sock 00:17:17.239 01:25:08 -- common/autotest_common.sh@819 -- # '[' -z 652197 ']' 
00:17:17.239 01:25:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.239 01:25:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:17.239 01:25:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:17.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:17.239 01:25:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:17.239 01:25:08 -- common/autotest_common.sh@10 -- # set +x 00:17:17.498 [2024-07-27 01:25:09.020932] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:17.498 [2024-07-27 01:25:09.021027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid652197 ] 00:17:17.498 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.498 [2024-07-27 01:25:09.082298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.499 [2024-07-27 01:25:09.188969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.435 01:25:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:18.435 01:25:09 -- common/autotest_common.sh@852 -- # return 0 00:17:18.435 01:25:09 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:18.694 [2024-07-27 01:25:10.242973] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:18.694 TLSTESTn1 00:17:18.694 01:25:10 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:18.694 Running I/O for 10 seconds... 
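On the initiator side, every successful TLS case in this log, including this long-key run, follows the same pattern: start bdevperf on its own RPC socket, attach a controller over TCP with --psk pointing at the same key file the target was given, then drive the verify workload through perform_tests. Condensed, with the paths from this run:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bdevperf.sock
    key=$spdk/test/nvmf/target/key_long.txt
    $spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 &   # harness waits for $sock
    $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"
    $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests

A mismatched key, host NQN, subsystem NQN, or missing --psk makes the same attach fail with "Could not find PSK for identity" or errno 107 and a JSON-RPC "Invalid parameters" response, which is exactly what the NOT-wrapped cases in this run check.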
00:17:30.942 00:17:30.942 Latency(us) 00:17:30.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.942 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:30.942 Verification LBA range: start 0x0 length 0x2000 00:17:30.942 TLSTESTn1 : 10.03 2055.96 8.03 0.00 0.00 62166.95 4805.97 63302.92 00:17:30.942 =================================================================================================================== 00:17:30.942 Total : 2055.96 8.03 0.00 0.00 62166.95 4805.97 63302.92 00:17:30.942 0 00:17:30.942 01:25:20 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:30.942 01:25:20 -- target/tls.sh@45 -- # killprocess 652197 00:17:30.942 01:25:20 -- common/autotest_common.sh@926 -- # '[' -z 652197 ']' 00:17:30.942 01:25:20 -- common/autotest_common.sh@930 -- # kill -0 652197 00:17:30.942 01:25:20 -- common/autotest_common.sh@931 -- # uname 00:17:30.942 01:25:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:30.942 01:25:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 652197 00:17:30.942 01:25:20 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:30.942 01:25:20 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:30.942 01:25:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 652197' 00:17:30.942 killing process with pid 652197 00:17:30.942 01:25:20 -- common/autotest_common.sh@945 -- # kill 652197 00:17:30.942 Received shutdown signal, test time was about 10.000000 seconds 00:17:30.942 00:17:30.942 Latency(us) 00:17:30.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.942 =================================================================================================================== 00:17:30.942 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.942 01:25:20 -- common/autotest_common.sh@950 -- # wait 652197 00:17:30.942 01:25:20 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:30.942 01:25:20 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:30.942 01:25:20 -- common/autotest_common.sh@640 -- # local es=0 00:17:30.942 01:25:20 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:30.942 01:25:20 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:30.942 01:25:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:30.942 01:25:20 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:30.942 01:25:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:30.942 01:25:20 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:30.942 01:25:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:30.942 01:25:20 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:30.942 01:25:20 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:30.942 01:25:20 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:17:30.942 01:25:20 -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:30.942 01:25:20 -- target/tls.sh@28 -- # bdevperf_pid=653690 00:17:30.942 01:25:20 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:30.942 01:25:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:30.942 01:25:20 -- target/tls.sh@31 -- # waitforlisten 653690 /var/tmp/bdevperf.sock 00:17:30.942 01:25:20 -- common/autotest_common.sh@819 -- # '[' -z 653690 ']' 00:17:30.942 01:25:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.942 01:25:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:30.942 01:25:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.942 01:25:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:30.942 01:25:20 -- common/autotest_common.sh@10 -- # set +x 00:17:30.942 [2024-07-27 01:25:20.853394] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:30.942 [2024-07-27 01:25:20.853492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid653690 ] 00:17:30.942 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.942 [2024-07-27 01:25:20.911965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.942 [2024-07-27 01:25:21.020112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.942 01:25:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.942 01:25:21 -- common/autotest_common.sh@852 -- # return 0 00:17:30.942 01:25:21 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:30.942 [2024-07-27 01:25:22.073376] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:30.942 [2024-07-27 01:25:22.073427] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:30.942 request: 00:17:30.942 { 00:17:30.942 "name": "TLSTEST", 00:17:30.942 "trtype": "tcp", 00:17:30.942 "traddr": "10.0.0.2", 00:17:30.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.942 "adrfam": "ipv4", 00:17:30.942 "trsvcid": "4420", 00:17:30.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.942 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:30.942 "method": "bdev_nvme_attach_controller", 00:17:30.942 "req_id": 1 00:17:30.942 } 00:17:30.942 Got JSON-RPC error response 00:17:30.942 response: 00:17:30.942 { 00:17:30.942 "code": -22, 00:17:30.942 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:30.942 } 00:17:30.942 01:25:22 -- target/tls.sh@36 -- # killprocess 653690 00:17:30.942 01:25:22 -- common/autotest_common.sh@926 -- # '[' -z 653690 ']' 00:17:30.942 01:25:22 -- common/autotest_common.sh@930 -- # kill -0 653690 
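This leg is an expected failure: key_long.txt was switched to mode 0666 just before the run, and this build rejects a PSK file with loose permissions (the error above reads 'Incorrect permissions for PSK file'); restoring mode 0600 later in the log makes the same call succeed. Roughly what the NOT run_bdevperf wrapper is asserting, as a sketch using the same RPC as before:
chmod 0666 key_long.txt
# the attach must now be rejected with JSON-RPC error -22 "Could not retrieve PSK from file"
if rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key_long.txt; then
    echo "unexpected: attach succeeded with a world-readable PSK" >&2
    exit 1
fi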
00:17:30.942 01:25:22 -- common/autotest_common.sh@931 -- # uname 00:17:30.942 01:25:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:30.942 01:25:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 653690 00:17:30.942 01:25:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:30.942 01:25:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:30.942 01:25:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 653690' 00:17:30.942 killing process with pid 653690 00:17:30.942 01:25:22 -- common/autotest_common.sh@945 -- # kill 653690 00:17:30.942 Received shutdown signal, test time was about 10.000000 seconds 00:17:30.942 00:17:30.942 Latency(us) 00:17:30.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.942 =================================================================================================================== 00:17:30.942 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:30.942 01:25:22 -- common/autotest_common.sh@950 -- # wait 653690 00:17:30.943 01:25:22 -- target/tls.sh@37 -- # return 1 00:17:30.943 01:25:22 -- common/autotest_common.sh@643 -- # es=1 00:17:30.943 01:25:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:30.943 01:25:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:30.943 01:25:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:30.943 01:25:22 -- target/tls.sh@183 -- # killprocess 651893 00:17:30.943 01:25:22 -- common/autotest_common.sh@926 -- # '[' -z 651893 ']' 00:17:30.943 01:25:22 -- common/autotest_common.sh@930 -- # kill -0 651893 00:17:30.943 01:25:22 -- common/autotest_common.sh@931 -- # uname 00:17:30.943 01:25:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:30.943 01:25:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 651893 00:17:30.943 01:25:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:30.943 01:25:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:30.943 01:25:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 651893' 00:17:30.943 killing process with pid 651893 00:17:30.943 01:25:22 -- common/autotest_common.sh@945 -- # kill 651893 00:17:30.943 01:25:22 -- common/autotest_common.sh@950 -- # wait 651893 00:17:30.943 01:25:22 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:30.943 01:25:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:30.943 01:25:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:30.943 01:25:22 -- common/autotest_common.sh@10 -- # set +x 00:17:30.943 01:25:22 -- nvmf/common.sh@469 -- # nvmfpid=653968 00:17:30.943 01:25:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:30.943 01:25:22 -- nvmf/common.sh@470 -- # waitforlisten 653968 00:17:30.943 01:25:22 -- common/autotest_common.sh@819 -- # '[' -z 653968 ']' 00:17:30.943 01:25:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.943 01:25:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:30.943 01:25:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
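The replacement target is launched inside the test's network namespace with tracing enabled. For orientation, the flags on that command line, read together with the target's own startup messages about the trace mask and /dev/shm/nvmf_trace.0:
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# -i 0     : shared-memory instance id (matches nvmf_trace.0 and the process_shm --id 0 cleanup at the end)
# -e 0xFFFF: enable all tracepoint groups ("Tracepoint Group Mask 0xFFFF specified")
# -m 0x2   : core mask, i.e. run the reactor on core 1 as the log reports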
00:17:30.943 01:25:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:30.943 01:25:22 -- common/autotest_common.sh@10 -- # set +x 00:17:31.202 [2024-07-27 01:25:22.720408] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:31.202 [2024-07-27 01:25:22.720499] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.202 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.202 [2024-07-27 01:25:22.793356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.202 [2024-07-27 01:25:22.910306] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:31.202 [2024-07-27 01:25:22.910479] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.202 [2024-07-27 01:25:22.910499] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.202 [2024-07-27 01:25:22.910514] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.202 [2024-07-27 01:25:22.910547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.138 01:25:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:32.138 01:25:23 -- common/autotest_common.sh@852 -- # return 0 00:17:32.138 01:25:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:32.138 01:25:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:32.138 01:25:23 -- common/autotest_common.sh@10 -- # set +x 00:17:32.138 01:25:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.138 01:25:23 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:32.138 01:25:23 -- common/autotest_common.sh@640 -- # local es=0 00:17:32.138 01:25:23 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:32.138 01:25:23 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:17:32.138 01:25:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:32.138 01:25:23 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:17:32.138 01:25:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:32.138 01:25:23 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:32.138 01:25:23 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:32.138 01:25:23 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:32.395 [2024-07-27 01:25:23.951324] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.395 01:25:23 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:32.654 01:25:24 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:32.915 [2024-07-27 01:25:24.420586] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:32.915 [2024-07-27 01:25:24.420837] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.915 01:25:24 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:32.915 malloc0 00:17:33.173 01:25:24 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:33.173 01:25:24 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:33.432 [2024-07-27 01:25:25.126738] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:33.432 [2024-07-27 01:25:25.126788] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:33.432 [2024-07-27 01:25:25.126822] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:33.432 request: 00:17:33.432 { 00:17:33.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.432 "host": "nqn.2016-06.io.spdk:host1", 00:17:33.432 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:33.432 "method": "nvmf_subsystem_add_host", 00:17:33.432 "req_id": 1 00:17:33.432 } 00:17:33.432 Got JSON-RPC error response 00:17:33.432 response: 00:17:33.432 { 00:17:33.432 "code": -32603, 00:17:33.432 "message": "Internal error" 00:17:33.432 } 00:17:33.432 01:25:25 -- common/autotest_common.sh@643 -- # es=1 00:17:33.432 01:25:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:33.432 01:25:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:33.432 01:25:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:33.432 01:25:25 -- target/tls.sh@189 -- # killprocess 653968 00:17:33.432 01:25:25 -- common/autotest_common.sh@926 -- # '[' -z 653968 ']' 00:17:33.432 01:25:25 -- common/autotest_common.sh@930 -- # kill -0 653968 00:17:33.432 01:25:25 -- common/autotest_common.sh@931 -- # uname 00:17:33.432 01:25:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:33.432 01:25:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 653968 00:17:33.432 01:25:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:33.432 01:25:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:33.432 01:25:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 653968' 00:17:33.432 killing process with pid 653968 00:17:33.432 01:25:25 -- common/autotest_common.sh@945 -- # kill 653968 00:17:33.432 01:25:25 -- common/autotest_common.sh@950 -- # wait 653968 00:17:33.998 01:25:25 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:33.998 01:25:25 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:33.998 01:25:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:33.998 01:25:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:33.998 01:25:25 -- common/autotest_common.sh@10 -- # set +x 00:17:33.998 01:25:25 -- nvmf/common.sh@469 -- # nvmfpid=654278 00:17:33.998 01:25:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
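The target enforces the same check: with the key still at 0666, nvmf_subsystem_add_host fails with -32603 ('Could not retrieve PSK from file' / 'Unable to add host to TCP transport'), and the test only proceeds once the key is back to a restrictive mode. In outline:
! rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key_long.txt   # expected to fail while the key is 0666
chmod 0600 key_long.txt    # after this, the fresh target set up below accepts the same call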
00:17:33.998 01:25:25 -- nvmf/common.sh@470 -- # waitforlisten 654278 00:17:33.998 01:25:25 -- common/autotest_common.sh@819 -- # '[' -z 654278 ']' 00:17:33.998 01:25:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.998 01:25:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:33.998 01:25:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.998 01:25:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:33.998 01:25:25 -- common/autotest_common.sh@10 -- # set +x 00:17:33.998 [2024-07-27 01:25:25.513655] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:33.998 [2024-07-27 01:25:25.513729] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.998 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.998 [2024-07-27 01:25:25.575947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.998 [2024-07-27 01:25:25.679452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:33.998 [2024-07-27 01:25:25.679597] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.998 [2024-07-27 01:25:25.679614] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.998 [2024-07-27 01:25:25.679626] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:33.998 [2024-07-27 01:25:25.679653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.935 01:25:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:34.935 01:25:26 -- common/autotest_common.sh@852 -- # return 0 00:17:34.935 01:25:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:34.935 01:25:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:34.935 01:25:26 -- common/autotest_common.sh@10 -- # set +x 00:17:34.935 01:25:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.935 01:25:26 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:34.935 01:25:26 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:34.935 01:25:26 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:35.193 [2024-07-27 01:25:26.730784] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.193 01:25:26 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:35.451 01:25:27 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:35.709 [2024-07-27 01:25:27.276296] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:35.709 [2024-07-27 01:25:27.276545] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.709 01:25:27 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:35.967 malloc0 00:17:35.967 01:25:27 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:36.225 01:25:27 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:36.484 01:25:28 -- target/tls.sh@197 -- # bdevperf_pid=654580 00:17:36.484 01:25:28 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:36.484 01:25:28 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:36.484 01:25:28 -- target/tls.sh@200 -- # waitforlisten 654580 /var/tmp/bdevperf.sock 00:17:36.484 01:25:28 -- common/autotest_common.sh@819 -- # '[' -z 654580 ']' 00:17:36.484 01:25:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.484 01:25:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:36.484 01:25:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:36.484 01:25:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:36.484 01:25:28 -- common/autotest_common.sh@10 -- # set +x 00:17:36.484 [2024-07-27 01:25:28.132099] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:36.484 [2024-07-27 01:25:28.132183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid654580 ] 00:17:36.484 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.484 [2024-07-27 01:25:28.189659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.743 [2024-07-27 01:25:28.298566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.679 01:25:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:37.679 01:25:29 -- common/autotest_common.sh@852 -- # return 0 00:17:37.679 01:25:29 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:37.679 [2024-07-27 01:25:29.289381] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:37.679 TLSTESTn1 00:17:37.679 01:25:29 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:37.938 01:25:29 -- target/tls.sh@205 -- # tgtconf='{ 00:17:37.938 "subsystems": [ 00:17:37.938 { 00:17:37.938 "subsystem": "iobuf", 00:17:37.938 "config": [ 00:17:37.938 { 00:17:37.938 "method": "iobuf_set_options", 00:17:37.938 "params": { 00:17:37.938 "small_pool_count": 8192, 00:17:37.938 "large_pool_count": 1024, 00:17:37.938 "small_bufsize": 8192, 00:17:37.938 "large_bufsize": 135168 00:17:37.938 } 00:17:37.938 } 00:17:37.938 ] 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "subsystem": "sock", 00:17:37.938 "config": [ 00:17:37.938 { 00:17:37.938 "method": "sock_impl_set_options", 00:17:37.938 "params": { 00:17:37.938 "impl_name": "posix", 00:17:37.938 "recv_buf_size": 2097152, 00:17:37.938 "send_buf_size": 2097152, 00:17:37.938 "enable_recv_pipe": true, 00:17:37.938 "enable_quickack": false, 00:17:37.938 "enable_placement_id": 0, 00:17:37.938 "enable_zerocopy_send_server": true, 00:17:37.938 "enable_zerocopy_send_client": false, 00:17:37.938 "zerocopy_threshold": 0, 00:17:37.938 "tls_version": 0, 00:17:37.938 "enable_ktls": false 00:17:37.938 } 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "method": "sock_impl_set_options", 00:17:37.938 "params": { 00:17:37.938 "impl_name": "ssl", 00:17:37.938 "recv_buf_size": 4096, 00:17:37.938 "send_buf_size": 4096, 00:17:37.938 "enable_recv_pipe": true, 00:17:37.938 "enable_quickack": false, 00:17:37.938 "enable_placement_id": 0, 00:17:37.938 "enable_zerocopy_send_server": true, 00:17:37.938 "enable_zerocopy_send_client": false, 00:17:37.938 "zerocopy_threshold": 0, 00:17:37.938 "tls_version": 0, 00:17:37.938 "enable_ktls": false 00:17:37.938 } 00:17:37.938 } 00:17:37.938 ] 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "subsystem": "vmd", 00:17:37.938 "config": [] 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "subsystem": "accel", 00:17:37.938 "config": [ 00:17:37.938 { 00:17:37.938 "method": "accel_set_options", 00:17:37.938 "params": { 00:17:37.938 "small_cache_size": 128, 
00:17:37.938 "large_cache_size": 16, 00:17:37.938 "task_count": 2048, 00:17:37.938 "sequence_count": 2048, 00:17:37.938 "buf_count": 2048 00:17:37.938 } 00:17:37.938 } 00:17:37.938 ] 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "subsystem": "bdev", 00:17:37.938 "config": [ 00:17:37.938 { 00:17:37.938 "method": "bdev_set_options", 00:17:37.938 "params": { 00:17:37.938 "bdev_io_pool_size": 65535, 00:17:37.938 "bdev_io_cache_size": 256, 00:17:37.938 "bdev_auto_examine": true, 00:17:37.938 "iobuf_small_cache_size": 128, 00:17:37.938 "iobuf_large_cache_size": 16 00:17:37.938 } 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "method": "bdev_raid_set_options", 00:17:37.938 "params": { 00:17:37.938 "process_window_size_kb": 1024 00:17:37.938 } 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "method": "bdev_iscsi_set_options", 00:17:37.938 "params": { 00:17:37.938 "timeout_sec": 30 00:17:37.938 } 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "method": "bdev_nvme_set_options", 00:17:37.938 "params": { 00:17:37.938 "action_on_timeout": "none", 00:17:37.938 "timeout_us": 0, 00:17:37.938 "timeout_admin_us": 0, 00:17:37.938 "keep_alive_timeout_ms": 10000, 00:17:37.938 "transport_retry_count": 4, 00:17:37.938 "arbitration_burst": 0, 00:17:37.938 "low_priority_weight": 0, 00:17:37.938 "medium_priority_weight": 0, 00:17:37.938 "high_priority_weight": 0, 00:17:37.938 "nvme_adminq_poll_period_us": 10000, 00:17:37.938 "nvme_ioq_poll_period_us": 0, 00:17:37.938 "io_queue_requests": 0, 00:17:37.938 "delay_cmd_submit": true, 00:17:37.938 "bdev_retry_count": 3, 00:17:37.938 "transport_ack_timeout": 0, 00:17:37.938 "ctrlr_loss_timeout_sec": 0, 00:17:37.938 "reconnect_delay_sec": 0, 00:17:37.938 "fast_io_fail_timeout_sec": 0, 00:17:37.938 "generate_uuids": false, 00:17:37.938 "transport_tos": 0, 00:17:37.938 "io_path_stat": false, 00:17:37.938 "allow_accel_sequence": false 00:17:37.938 } 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "method": "bdev_nvme_set_hotplug", 00:17:37.938 "params": { 00:17:37.938 "period_us": 100000, 00:17:37.938 "enable": false 00:17:37.938 } 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "method": "bdev_malloc_create", 00:17:37.938 "params": { 00:17:37.938 "name": "malloc0", 00:17:37.938 "num_blocks": 8192, 00:17:37.938 "block_size": 4096, 00:17:37.938 "physical_block_size": 4096, 00:17:37.938 "uuid": "f1af1d62-9aad-4e87-a904-1b7be603f92a", 00:17:37.938 "optimal_io_boundary": 0 00:17:37.938 } 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "method": "bdev_wait_for_examine" 00:17:37.938 } 00:17:37.938 ] 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "subsystem": "nbd", 00:17:37.938 "config": [] 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "subsystem": "scheduler", 00:17:37.938 "config": [ 00:17:37.938 { 00:17:37.938 "method": "framework_set_scheduler", 00:17:37.938 "params": { 00:17:37.938 "name": "static" 00:17:37.938 } 00:17:37.938 } 00:17:37.938 ] 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "subsystem": "nvmf", 00:17:37.938 "config": [ 00:17:37.938 { 00:17:37.938 "method": "nvmf_set_config", 00:17:37.938 "params": { 00:17:37.938 "discovery_filter": "match_any", 00:17:37.938 "admin_cmd_passthru": { 00:17:37.938 "identify_ctrlr": false 00:17:37.938 } 00:17:37.938 } 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "method": "nvmf_set_max_subsystems", 00:17:37.939 "params": { 00:17:37.939 "max_subsystems": 1024 00:17:37.939 } 00:17:37.939 }, 00:17:37.939 { 00:17:37.939 "method": "nvmf_set_crdt", 00:17:37.939 "params": { 00:17:37.939 "crdt1": 0, 00:17:37.939 "crdt2": 0, 00:17:37.939 "crdt3": 0 00:17:37.939 } 
00:17:37.939 }, 00:17:37.939 { 00:17:37.939 "method": "nvmf_create_transport", 00:17:37.939 "params": { 00:17:37.939 "trtype": "TCP", 00:17:37.939 "max_queue_depth": 128, 00:17:37.939 "max_io_qpairs_per_ctrlr": 127, 00:17:37.939 "in_capsule_data_size": 4096, 00:17:37.939 "max_io_size": 131072, 00:17:37.939 "io_unit_size": 131072, 00:17:37.939 "max_aq_depth": 128, 00:17:37.939 "num_shared_buffers": 511, 00:17:37.939 "buf_cache_size": 4294967295, 00:17:37.939 "dif_insert_or_strip": false, 00:17:37.939 "zcopy": false, 00:17:37.939 "c2h_success": false, 00:17:37.939 "sock_priority": 0, 00:17:37.939 "abort_timeout_sec": 1 00:17:37.939 } 00:17:37.939 }, 00:17:37.939 { 00:17:37.939 "method": "nvmf_create_subsystem", 00:17:37.939 "params": { 00:17:37.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.939 "allow_any_host": false, 00:17:37.939 "serial_number": "SPDK00000000000001", 00:17:37.939 "model_number": "SPDK bdev Controller", 00:17:37.939 "max_namespaces": 10, 00:17:37.939 "min_cntlid": 1, 00:17:37.939 "max_cntlid": 65519, 00:17:37.939 "ana_reporting": false 00:17:37.939 } 00:17:37.939 }, 00:17:37.939 { 00:17:37.939 "method": "nvmf_subsystem_add_host", 00:17:37.939 "params": { 00:17:37.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.939 "host": "nqn.2016-06.io.spdk:host1", 00:17:37.939 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:37.939 } 00:17:37.939 }, 00:17:37.939 { 00:17:37.939 "method": "nvmf_subsystem_add_ns", 00:17:37.939 "params": { 00:17:37.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.939 "namespace": { 00:17:37.939 "nsid": 1, 00:17:37.939 "bdev_name": "malloc0", 00:17:37.939 "nguid": "F1AF1D629AAD4E87A9041B7BE603F92A", 00:17:37.939 "uuid": "f1af1d62-9aad-4e87-a904-1b7be603f92a" 00:17:37.939 } 00:17:37.939 } 00:17:37.939 }, 00:17:37.939 { 00:17:37.939 "method": "nvmf_subsystem_add_listener", 00:17:37.939 "params": { 00:17:37.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.939 "listen_address": { 00:17:37.939 "trtype": "TCP", 00:17:37.939 "adrfam": "IPv4", 00:17:37.939 "traddr": "10.0.0.2", 00:17:37.939 "trsvcid": "4420" 00:17:37.939 }, 00:17:37.939 "secure_channel": true 00:17:37.939 } 00:17:37.939 } 00:17:37.939 ] 00:17:37.939 } 00:17:37.939 ] 00:17:37.939 }' 00:17:37.939 01:25:29 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:38.508 01:25:29 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:38.508 "subsystems": [ 00:17:38.508 { 00:17:38.508 "subsystem": "iobuf", 00:17:38.508 "config": [ 00:17:38.508 { 00:17:38.508 "method": "iobuf_set_options", 00:17:38.508 "params": { 00:17:38.508 "small_pool_count": 8192, 00:17:38.508 "large_pool_count": 1024, 00:17:38.508 "small_bufsize": 8192, 00:17:38.508 "large_bufsize": 135168 00:17:38.508 } 00:17:38.508 } 00:17:38.508 ] 00:17:38.508 }, 00:17:38.508 { 00:17:38.508 "subsystem": "sock", 00:17:38.508 "config": [ 00:17:38.508 { 00:17:38.508 "method": "sock_impl_set_options", 00:17:38.508 "params": { 00:17:38.508 "impl_name": "posix", 00:17:38.508 "recv_buf_size": 2097152, 00:17:38.508 "send_buf_size": 2097152, 00:17:38.508 "enable_recv_pipe": true, 00:17:38.508 "enable_quickack": false, 00:17:38.508 "enable_placement_id": 0, 00:17:38.508 "enable_zerocopy_send_server": true, 00:17:38.508 "enable_zerocopy_send_client": false, 00:17:38.508 "zerocopy_threshold": 0, 00:17:38.508 "tls_version": 0, 00:17:38.508 "enable_ktls": false 00:17:38.508 } 00:17:38.509 }, 00:17:38.509 { 00:17:38.509 "method": 
"sock_impl_set_options", 00:17:38.509 "params": { 00:17:38.509 "impl_name": "ssl", 00:17:38.509 "recv_buf_size": 4096, 00:17:38.509 "send_buf_size": 4096, 00:17:38.509 "enable_recv_pipe": true, 00:17:38.509 "enable_quickack": false, 00:17:38.509 "enable_placement_id": 0, 00:17:38.509 "enable_zerocopy_send_server": true, 00:17:38.509 "enable_zerocopy_send_client": false, 00:17:38.509 "zerocopy_threshold": 0, 00:17:38.509 "tls_version": 0, 00:17:38.509 "enable_ktls": false 00:17:38.509 } 00:17:38.509 } 00:17:38.509 ] 00:17:38.509 }, 00:17:38.509 { 00:17:38.509 "subsystem": "vmd", 00:17:38.509 "config": [] 00:17:38.509 }, 00:17:38.509 { 00:17:38.509 "subsystem": "accel", 00:17:38.509 "config": [ 00:17:38.509 { 00:17:38.509 "method": "accel_set_options", 00:17:38.509 "params": { 00:17:38.509 "small_cache_size": 128, 00:17:38.509 "large_cache_size": 16, 00:17:38.509 "task_count": 2048, 00:17:38.509 "sequence_count": 2048, 00:17:38.509 "buf_count": 2048 00:17:38.509 } 00:17:38.509 } 00:17:38.509 ] 00:17:38.509 }, 00:17:38.509 { 00:17:38.509 "subsystem": "bdev", 00:17:38.509 "config": [ 00:17:38.509 { 00:17:38.509 "method": "bdev_set_options", 00:17:38.509 "params": { 00:17:38.509 "bdev_io_pool_size": 65535, 00:17:38.509 "bdev_io_cache_size": 256, 00:17:38.509 "bdev_auto_examine": true, 00:17:38.509 "iobuf_small_cache_size": 128, 00:17:38.509 "iobuf_large_cache_size": 16 00:17:38.509 } 00:17:38.509 }, 00:17:38.509 { 00:17:38.509 "method": "bdev_raid_set_options", 00:17:38.509 "params": { 00:17:38.509 "process_window_size_kb": 1024 00:17:38.509 } 00:17:38.509 }, 00:17:38.509 { 00:17:38.509 "method": "bdev_iscsi_set_options", 00:17:38.509 "params": { 00:17:38.509 "timeout_sec": 30 00:17:38.509 } 00:17:38.509 }, 00:17:38.509 { 00:17:38.509 "method": "bdev_nvme_set_options", 00:17:38.509 "params": { 00:17:38.509 "action_on_timeout": "none", 00:17:38.509 "timeout_us": 0, 00:17:38.509 "timeout_admin_us": 0, 00:17:38.509 "keep_alive_timeout_ms": 10000, 00:17:38.509 "transport_retry_count": 4, 00:17:38.509 "arbitration_burst": 0, 00:17:38.509 "low_priority_weight": 0, 00:17:38.509 "medium_priority_weight": 0, 00:17:38.509 "high_priority_weight": 0, 00:17:38.509 "nvme_adminq_poll_period_us": 10000, 00:17:38.509 "nvme_ioq_poll_period_us": 0, 00:17:38.509 "io_queue_requests": 512, 00:17:38.509 "delay_cmd_submit": true, 00:17:38.509 "bdev_retry_count": 3, 00:17:38.509 "transport_ack_timeout": 0, 00:17:38.509 "ctrlr_loss_timeout_sec": 0, 00:17:38.509 "reconnect_delay_sec": 0, 00:17:38.509 "fast_io_fail_timeout_sec": 0, 00:17:38.509 "generate_uuids": false, 00:17:38.509 "transport_tos": 0, 00:17:38.509 "io_path_stat": false, 00:17:38.509 "allow_accel_sequence": false 00:17:38.509 } 00:17:38.509 }, 00:17:38.509 { 00:17:38.509 "method": "bdev_nvme_attach_controller", 00:17:38.509 "params": { 00:17:38.509 "name": "TLSTEST", 00:17:38.509 "trtype": "TCP", 00:17:38.509 "adrfam": "IPv4", 00:17:38.509 "traddr": "10.0.0.2", 00:17:38.509 "trsvcid": "4420", 00:17:38.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.509 "prchk_reftag": false, 00:17:38.509 "prchk_guard": false, 00:17:38.509 "ctrlr_loss_timeout_sec": 0, 00:17:38.509 "reconnect_delay_sec": 0, 00:17:38.509 "fast_io_fail_timeout_sec": 0, 00:17:38.509 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:38.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.509 "hdgst": false, 00:17:38.509 "ddgst": false 00:17:38.509 } 00:17:38.509 }, 00:17:38.509 { 00:17:38.509 "method": "bdev_nvme_set_hotplug", 00:17:38.509 
"params": { 00:17:38.509 "period_us": 100000, 00:17:38.509 "enable": false 00:17:38.509 } 00:17:38.509 }, 00:17:38.509 { 00:17:38.509 "method": "bdev_wait_for_examine" 00:17:38.509 } 00:17:38.509 ] 00:17:38.509 }, 00:17:38.509 { 00:17:38.509 "subsystem": "nbd", 00:17:38.509 "config": [] 00:17:38.509 } 00:17:38.509 ] 00:17:38.509 }' 00:17:38.509 01:25:29 -- target/tls.sh@208 -- # killprocess 654580 00:17:38.509 01:25:29 -- common/autotest_common.sh@926 -- # '[' -z 654580 ']' 00:17:38.509 01:25:29 -- common/autotest_common.sh@930 -- # kill -0 654580 00:17:38.509 01:25:29 -- common/autotest_common.sh@931 -- # uname 00:17:38.509 01:25:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:38.509 01:25:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 654580 00:17:38.509 01:25:29 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:38.509 01:25:29 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:38.509 01:25:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 654580' 00:17:38.509 killing process with pid 654580 00:17:38.509 01:25:29 -- common/autotest_common.sh@945 -- # kill 654580 00:17:38.509 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.509 00:17:38.509 Latency(us) 00:17:38.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.509 =================================================================================================================== 00:17:38.509 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.509 01:25:29 -- common/autotest_common.sh@950 -- # wait 654580 00:17:38.509 01:25:30 -- target/tls.sh@209 -- # killprocess 654278 00:17:38.509 01:25:30 -- common/autotest_common.sh@926 -- # '[' -z 654278 ']' 00:17:38.509 01:25:30 -- common/autotest_common.sh@930 -- # kill -0 654278 00:17:38.509 01:25:30 -- common/autotest_common.sh@931 -- # uname 00:17:38.509 01:25:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:38.509 01:25:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 654278 00:17:38.767 01:25:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:38.767 01:25:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:38.767 01:25:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 654278' 00:17:38.767 killing process with pid 654278 00:17:38.767 01:25:30 -- common/autotest_common.sh@945 -- # kill 654278 00:17:38.767 01:25:30 -- common/autotest_common.sh@950 -- # wait 654278 00:17:39.027 01:25:30 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:39.027 01:25:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:39.027 01:25:30 -- target/tls.sh@212 -- # echo '{ 00:17:39.027 "subsystems": [ 00:17:39.027 { 00:17:39.027 "subsystem": "iobuf", 00:17:39.027 "config": [ 00:17:39.027 { 00:17:39.027 "method": "iobuf_set_options", 00:17:39.027 "params": { 00:17:39.027 "small_pool_count": 8192, 00:17:39.027 "large_pool_count": 1024, 00:17:39.027 "small_bufsize": 8192, 00:17:39.027 "large_bufsize": 135168 00:17:39.027 } 00:17:39.027 } 00:17:39.027 ] 00:17:39.027 }, 00:17:39.027 { 00:17:39.027 "subsystem": "sock", 00:17:39.027 "config": [ 00:17:39.027 { 00:17:39.027 "method": "sock_impl_set_options", 00:17:39.027 "params": { 00:17:39.027 "impl_name": "posix", 00:17:39.027 "recv_buf_size": 2097152, 00:17:39.027 "send_buf_size": 2097152, 00:17:39.027 "enable_recv_pipe": true, 00:17:39.027 "enable_quickack": false, 00:17:39.027 
"enable_placement_id": 0, 00:17:39.027 "enable_zerocopy_send_server": true, 00:17:39.027 "enable_zerocopy_send_client": false, 00:17:39.027 "zerocopy_threshold": 0, 00:17:39.027 "tls_version": 0, 00:17:39.027 "enable_ktls": false 00:17:39.027 } 00:17:39.027 }, 00:17:39.027 { 00:17:39.027 "method": "sock_impl_set_options", 00:17:39.027 "params": { 00:17:39.027 "impl_name": "ssl", 00:17:39.027 "recv_buf_size": 4096, 00:17:39.027 "send_buf_size": 4096, 00:17:39.027 "enable_recv_pipe": true, 00:17:39.027 "enable_quickack": false, 00:17:39.027 "enable_placement_id": 0, 00:17:39.027 "enable_zerocopy_send_server": true, 00:17:39.027 "enable_zerocopy_send_client": false, 00:17:39.027 "zerocopy_threshold": 0, 00:17:39.027 "tls_version": 0, 00:17:39.027 "enable_ktls": false 00:17:39.027 } 00:17:39.027 } 00:17:39.027 ] 00:17:39.027 }, 00:17:39.027 { 00:17:39.027 "subsystem": "vmd", 00:17:39.027 "config": [] 00:17:39.027 }, 00:17:39.027 { 00:17:39.027 "subsystem": "accel", 00:17:39.027 "config": [ 00:17:39.027 { 00:17:39.027 "method": "accel_set_options", 00:17:39.027 "params": { 00:17:39.027 "small_cache_size": 128, 00:17:39.027 "large_cache_size": 16, 00:17:39.027 "task_count": 2048, 00:17:39.027 "sequence_count": 2048, 00:17:39.027 "buf_count": 2048 00:17:39.027 } 00:17:39.027 } 00:17:39.027 ] 00:17:39.027 }, 00:17:39.027 { 00:17:39.027 "subsystem": "bdev", 00:17:39.027 "config": [ 00:17:39.027 { 00:17:39.027 "method": "bdev_set_options", 00:17:39.027 "params": { 00:17:39.027 "bdev_io_pool_size": 65535, 00:17:39.027 "bdev_io_cache_size": 256, 00:17:39.027 "bdev_auto_examine": true, 00:17:39.027 "iobuf_small_cache_size": 128, 00:17:39.027 "iobuf_large_cache_size": 16 00:17:39.027 } 00:17:39.027 }, 00:17:39.027 { 00:17:39.027 "method": "bdev_raid_set_options", 00:17:39.027 "params": { 00:17:39.027 "process_window_size_kb": 1024 00:17:39.027 } 00:17:39.027 }, 00:17:39.027 { 00:17:39.027 "method": "bdev_iscsi_set_options", 00:17:39.027 "params": { 00:17:39.027 "timeout_sec": 30 00:17:39.027 } 00:17:39.027 }, 00:17:39.027 { 00:17:39.027 "method": "bdev_nvme_set_options", 00:17:39.027 "params": { 00:17:39.027 "action_on_timeout": "none", 00:17:39.027 "timeout_us": 0, 00:17:39.028 "timeout_admin_us": 0, 00:17:39.028 "keep_alive_timeout_ms": 10000, 00:17:39.028 "transport_retry_count": 4, 00:17:39.028 "arbitration_burst": 0, 00:17:39.028 "low_priority_weight": 0, 00:17:39.028 "medium_priority_weight": 0, 00:17:39.028 "high_priority_weight": 0, 00:17:39.028 "nvme_adminq_poll_period_us": 10000, 00:17:39.028 "nvme_ioq_poll_period_us": 0, 00:17:39.028 "io_queue_requests": 0, 00:17:39.028 "delay_cmd_submit": true, 00:17:39.028 "bdev_retry_count": 3, 00:17:39.028 "transport_ack_timeout": 0, 00:17:39.028 "ctrlr_loss_timeout_sec": 0, 00:17:39.028 "reconnect_delay_sec": 0, 00:17:39.028 "fast_io_fail_timeout_sec": 0, 00:17:39.028 "generate_uuids": false, 00:17:39.028 "transport_tos": 0, 00:17:39.028 "io_path_stat": false, 00:17:39.028 "allow_accel_sequence": false 00:17:39.028 } 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "method": "bdev_nvme_set_hotplug", 00:17:39.028 "params": { 00:17:39.028 "period_us": 100000, 00:17:39.028 "enable": false 00:17:39.028 } 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "method": "bdev_malloc_create", 00:17:39.028 "params": { 00:17:39.028 "name": "malloc0", 00:17:39.028 "num_blocks": 8192, 00:17:39.028 "block_size": 4096, 00:17:39.028 "physical_block_size": 4096, 00:17:39.028 "uuid": "f1af1d62-9aad-4e87-a904-1b7be603f92a", 00:17:39.028 "optimal_io_boundary": 0 00:17:39.028 } 
00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "method": "bdev_wait_for_examine" 00:17:39.028 } 00:17:39.028 ] 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "subsystem": "nbd", 00:17:39.028 "config": [] 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "subsystem": "scheduler", 00:17:39.028 "config": [ 00:17:39.028 { 00:17:39.028 "method": "framework_set_scheduler", 00:17:39.028 "params": { 00:17:39.028 "name": "static" 00:17:39.028 } 00:17:39.028 } 00:17:39.028 ] 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "subsystem": "nvmf", 00:17:39.028 "config": [ 00:17:39.028 { 00:17:39.028 "method": "nvmf_set_config", 00:17:39.028 "params": { 00:17:39.028 "discovery_filter": "match_any", 00:17:39.028 "admin_cmd_passthru": { 00:17:39.028 "identify_ctrlr": false 00:17:39.028 } 00:17:39.028 } 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "method": "nvmf_set_max_subsystems", 00:17:39.028 "params": { 00:17:39.028 "max_subsystems": 1024 00:17:39.028 } 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "method": "nvmf_set_crdt", 00:17:39.028 "params": { 00:17:39.028 "crdt1": 0, 00:17:39.028 "crdt2": 0, 00:17:39.028 "crdt3": 0 00:17:39.028 } 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "method": "nvmf_create_transport", 00:17:39.028 "params": { 00:17:39.028 "trtype": "TCP", 00:17:39.028 "max_queue_depth": 128, 00:17:39.028 "max_io_qpairs_per_ctrlr": 127, 00:17:39.028 "in_capsule_data_size": 4096, 00:17:39.028 "max_io_size": 131072, 00:17:39.028 "io_unit_size": 131072, 00:17:39.028 "max_aq_depth": 128, 00:17:39.028 "num_shared_buffers": 511, 00:17:39.028 "buf_cache_size": 4294967295, 00:17:39.028 "dif_insert_or_strip": false, 00:17:39.028 "zcopy": false, 00:17:39.028 "c2h_success": false, 00:17:39.028 "sock_priority": 0, 00:17:39.028 "abort_timeout_sec": 1 00:17:39.028 } 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "method": "nvmf_create_subsystem", 00:17:39.028 "params": { 00:17:39.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.028 "allow_any_host": false, 00:17:39.028 "serial_number": "SPDK00000000000001", 00:17:39.028 "model_number": "SPDK bdev Controller", 00:17:39.028 "max_namespaces": 10, 00:17:39.028 "min_cntlid": 1, 00:17:39.028 "max_cntlid": 65519, 00:17:39.028 "ana_reporting": false 00:17:39.028 } 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "method": "nvmf_subsystem_add_host", 00:17:39.028 "params": { 00:17:39.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.028 "host": "nqn.2016-06.io.spdk:host1", 00:17:39.028 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:39.028 } 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "method": "nvmf_subsystem_add_ns", 00:17:39.028 "params": { 00:17:39.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.028 "namespace": { 00:17:39.028 "nsid": 1, 00:17:39.028 "bdev_name": "malloc0", 00:17:39.028 "nguid": "F1AF1D629AAD4E87A9041B7BE603F92A", 00:17:39.028 "uuid": "f1af1d62-9aad-4e87-a904-1b7be603f92a" 00:17:39.028 } 00:17:39.028 } 00:17:39.028 }, 00:17:39.028 { 00:17:39.028 "method": "nvmf_subsystem_add_listener", 00:17:39.028 "params": { 00:17:39.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.028 "listen_address": { 00:17:39.028 "trtype": "TCP", 00:17:39.028 "adrfam": "IPv4", 00:17:39.028 "traddr": "10.0.0.2", 00:17:39.028 "trsvcid": "4420" 00:17:39.028 }, 00:17:39.028 "secure_channel": true 00:17:39.028 } 00:17:39.028 } 00:17:39.028 ] 00:17:39.028 } 00:17:39.028 ] 00:17:39.028 }' 00:17:39.028 01:25:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:39.028 01:25:30 -- common/autotest_common.sh@10 -- # set +x 00:17:39.028 01:25:30 -- 
nvmf/common.sh@469 -- # nvmfpid=654997 00:17:39.028 01:25:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:39.028 01:25:30 -- nvmf/common.sh@470 -- # waitforlisten 654997 00:17:39.028 01:25:30 -- common/autotest_common.sh@819 -- # '[' -z 654997 ']' 00:17:39.028 01:25:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.028 01:25:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:39.028 01:25:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.028 01:25:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:39.028 01:25:30 -- common/autotest_common.sh@10 -- # set +x 00:17:39.028 [2024-07-27 01:25:30.598650] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:39.028 [2024-07-27 01:25:30.598727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.028 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.028 [2024-07-27 01:25:30.661185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.028 [2024-07-27 01:25:30.766772] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:39.028 [2024-07-27 01:25:30.766931] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.028 [2024-07-27 01:25:30.766949] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.028 [2024-07-27 01:25:30.766960] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
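From here on the test stops issuing per-object RPCs and instead replays the configuration captured earlier with save_config: the tgtconf JSON shown above is handed to nvmf_tgt at startup via -c, supplied on /dev/fd/62 rather than a file on disk. The equivalent with a plain file would look roughly like:
rpc.py save_config > tgt.json                      # captured from the previous, fully configured target
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c tgt.json &
# subsystem, TLS listener, malloc0 namespace and the PSK-protected host all come back from the JSON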
00:17:39.028 [2024-07-27 01:25:30.766987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.288 [2024-07-27 01:25:30.998151] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.288 [2024-07-27 01:25:31.030185] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:39.288 [2024-07-27 01:25:31.030452] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.856 01:25:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:39.856 01:25:31 -- common/autotest_common.sh@852 -- # return 0 00:17:39.856 01:25:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:39.856 01:25:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:39.856 01:25:31 -- common/autotest_common.sh@10 -- # set +x 00:17:39.856 01:25:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.856 01:25:31 -- target/tls.sh@216 -- # bdevperf_pid=655076 00:17:39.856 01:25:31 -- target/tls.sh@217 -- # waitforlisten 655076 /var/tmp/bdevperf.sock 00:17:39.856 01:25:31 -- common/autotest_common.sh@819 -- # '[' -z 655076 ']' 00:17:39.856 01:25:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.856 01:25:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:39.856 01:25:31 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:39.856 01:25:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
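bdevperf gets the same treatment: the bdevperfconf JSON saved from /var/tmp/bdevperf.sock already contains the bdev_nvme_attach_controller entry with the PSK, so starting bdevperf with -c recreates the TLS connection without any further attach RPC; only perform_tests is needed afterwards. A sketch with a plain file instead of /dev/fd/63:
rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf.json    # captured earlier while TLSTEST was attached
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c bdevperf.json &
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests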
00:17:39.856 01:25:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:39.856 01:25:31 -- common/autotest_common.sh@10 -- # set +x 00:17:39.856 01:25:31 -- target/tls.sh@213 -- # echo '{ 00:17:39.856 "subsystems": [ 00:17:39.856 { 00:17:39.856 "subsystem": "iobuf", 00:17:39.856 "config": [ 00:17:39.856 { 00:17:39.856 "method": "iobuf_set_options", 00:17:39.856 "params": { 00:17:39.856 "small_pool_count": 8192, 00:17:39.856 "large_pool_count": 1024, 00:17:39.856 "small_bufsize": 8192, 00:17:39.856 "large_bufsize": 135168 00:17:39.856 } 00:17:39.856 } 00:17:39.856 ] 00:17:39.856 }, 00:17:39.856 { 00:17:39.856 "subsystem": "sock", 00:17:39.856 "config": [ 00:17:39.856 { 00:17:39.856 "method": "sock_impl_set_options", 00:17:39.856 "params": { 00:17:39.856 "impl_name": "posix", 00:17:39.856 "recv_buf_size": 2097152, 00:17:39.856 "send_buf_size": 2097152, 00:17:39.856 "enable_recv_pipe": true, 00:17:39.856 "enable_quickack": false, 00:17:39.856 "enable_placement_id": 0, 00:17:39.856 "enable_zerocopy_send_server": true, 00:17:39.856 "enable_zerocopy_send_client": false, 00:17:39.856 "zerocopy_threshold": 0, 00:17:39.856 "tls_version": 0, 00:17:39.856 "enable_ktls": false 00:17:39.856 } 00:17:39.856 }, 00:17:39.856 { 00:17:39.856 "method": "sock_impl_set_options", 00:17:39.856 "params": { 00:17:39.856 "impl_name": "ssl", 00:17:39.856 "recv_buf_size": 4096, 00:17:39.856 "send_buf_size": 4096, 00:17:39.856 "enable_recv_pipe": true, 00:17:39.856 "enable_quickack": false, 00:17:39.856 "enable_placement_id": 0, 00:17:39.856 "enable_zerocopy_send_server": true, 00:17:39.856 "enable_zerocopy_send_client": false, 00:17:39.856 "zerocopy_threshold": 0, 00:17:39.856 "tls_version": 0, 00:17:39.856 "enable_ktls": false 00:17:39.856 } 00:17:39.856 } 00:17:39.856 ] 00:17:39.856 }, 00:17:39.856 { 00:17:39.856 "subsystem": "vmd", 00:17:39.856 "config": [] 00:17:39.856 }, 00:17:39.856 { 00:17:39.856 "subsystem": "accel", 00:17:39.856 "config": [ 00:17:39.856 { 00:17:39.856 "method": "accel_set_options", 00:17:39.856 "params": { 00:17:39.856 "small_cache_size": 128, 00:17:39.856 "large_cache_size": 16, 00:17:39.856 "task_count": 2048, 00:17:39.856 "sequence_count": 2048, 00:17:39.856 "buf_count": 2048 00:17:39.856 } 00:17:39.856 } 00:17:39.856 ] 00:17:39.856 }, 00:17:39.856 { 00:17:39.856 "subsystem": "bdev", 00:17:39.856 "config": [ 00:17:39.856 { 00:17:39.856 "method": "bdev_set_options", 00:17:39.856 "params": { 00:17:39.856 "bdev_io_pool_size": 65535, 00:17:39.856 "bdev_io_cache_size": 256, 00:17:39.856 "bdev_auto_examine": true, 00:17:39.856 "iobuf_small_cache_size": 128, 00:17:39.856 "iobuf_large_cache_size": 16 00:17:39.856 } 00:17:39.856 }, 00:17:39.856 { 00:17:39.856 "method": "bdev_raid_set_options", 00:17:39.857 "params": { 00:17:39.857 "process_window_size_kb": 1024 00:17:39.857 } 00:17:39.857 }, 00:17:39.857 { 00:17:39.857 "method": "bdev_iscsi_set_options", 00:17:39.857 "params": { 00:17:39.857 "timeout_sec": 30 00:17:39.857 } 00:17:39.857 }, 00:17:39.857 { 00:17:39.857 "method": "bdev_nvme_set_options", 00:17:39.857 "params": { 00:17:39.857 "action_on_timeout": "none", 00:17:39.857 "timeout_us": 0, 00:17:39.857 "timeout_admin_us": 0, 00:17:39.857 "keep_alive_timeout_ms": 10000, 00:17:39.857 "transport_retry_count": 4, 00:17:39.857 "arbitration_burst": 0, 00:17:39.857 "low_priority_weight": 0, 00:17:39.857 "medium_priority_weight": 0, 00:17:39.857 "high_priority_weight": 0, 00:17:39.857 "nvme_adminq_poll_period_us": 10000, 00:17:39.857 "nvme_ioq_poll_period_us": 0, 00:17:39.857 
"io_queue_requests": 512, 00:17:39.857 "delay_cmd_submit": true, 00:17:39.857 "bdev_retry_count": 3, 00:17:39.857 "transport_ack_timeout": 0, 00:17:39.857 "ctrlr_loss_timeout_sec": 0, 00:17:39.857 "reconnect_delay_sec": 0, 00:17:39.857 "fast_io_fail_timeout_sec": 0, 00:17:39.857 "generate_uuids": false, 00:17:39.857 "transport_tos": 0, 00:17:39.857 "io_path_stat": false, 00:17:39.857 "allow_accel_sequence": false 00:17:39.857 } 00:17:39.857 }, 00:17:39.857 { 00:17:39.857 "method": "bdev_nvme_attach_controller", 00:17:39.857 "params": { 00:17:39.857 "name": "TLSTEST", 00:17:39.857 "trtype": "TCP", 00:17:39.857 "adrfam": "IPv4", 00:17:39.857 "traddr": "10.0.0.2", 00:17:39.857 "trsvcid": "4420", 00:17:39.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.857 "prchk_reftag": false, 00:17:39.857 "prchk_guard": false, 00:17:39.857 "ctrlr_loss_timeout_sec": 0, 00:17:39.857 "reconnect_delay_sec": 0, 00:17:39.857 "fast_io_fail_timeout_sec": 0, 00:17:39.857 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:39.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.857 "hdgst": false, 00:17:39.857 "ddgst": false 00:17:39.857 } 00:17:39.857 }, 00:17:39.857 { 00:17:39.857 "method": "bdev_nvme_set_hotplug", 00:17:39.857 "params": { 00:17:39.857 "period_us": 100000, 00:17:39.857 "enable": false 00:17:39.857 } 00:17:39.857 }, 00:17:39.857 { 00:17:39.857 "method": "bdev_wait_for_examine" 00:17:39.857 } 00:17:39.857 ] 00:17:39.857 }, 00:17:39.857 { 00:17:39.857 "subsystem": "nbd", 00:17:39.857 "config": [] 00:17:39.857 } 00:17:39.857 ] 00:17:39.857 }' 00:17:40.115 [2024-07-27 01:25:31.637520] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:40.115 [2024-07-27 01:25:31.637590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655076 ] 00:17:40.115 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.115 [2024-07-27 01:25:31.695756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.115 [2024-07-27 01:25:31.802391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.375 [2024-07-27 01:25:31.966278] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.942 01:25:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:40.942 01:25:32 -- common/autotest_common.sh@852 -- # return 0 00:17:40.942 01:25:32 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:41.202 Running I/O for 10 seconds... 
00:17:51.175 00:17:51.175 Latency(us) 00:17:51.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.175 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:51.175 Verification LBA range: start 0x0 length 0x2000 00:17:51.175 TLSTESTn1 : 10.03 2044.13 7.98 0.00 0.00 62520.37 8689.59 65633.09 00:17:51.175 =================================================================================================================== 00:17:51.175 Total : 2044.13 7.98 0.00 0.00 62520.37 8689.59 65633.09 00:17:51.175 0 00:17:51.175 01:25:42 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:51.175 01:25:42 -- target/tls.sh@223 -- # killprocess 655076 00:17:51.175 01:25:42 -- common/autotest_common.sh@926 -- # '[' -z 655076 ']' 00:17:51.175 01:25:42 -- common/autotest_common.sh@930 -- # kill -0 655076 00:17:51.175 01:25:42 -- common/autotest_common.sh@931 -- # uname 00:17:51.175 01:25:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:51.175 01:25:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 655076 00:17:51.175 01:25:42 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:51.175 01:25:42 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:51.175 01:25:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 655076' 00:17:51.175 killing process with pid 655076 00:17:51.175 01:25:42 -- common/autotest_common.sh@945 -- # kill 655076 00:17:51.175 Received shutdown signal, test time was about 10.000000 seconds 00:17:51.175 00:17:51.175 Latency(us) 00:17:51.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.175 =================================================================================================================== 00:17:51.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:51.175 01:25:42 -- common/autotest_common.sh@950 -- # wait 655076 00:17:51.435 01:25:43 -- target/tls.sh@224 -- # killprocess 654997 00:17:51.435 01:25:43 -- common/autotest_common.sh@926 -- # '[' -z 654997 ']' 00:17:51.435 01:25:43 -- common/autotest_common.sh@930 -- # kill -0 654997 00:17:51.435 01:25:43 -- common/autotest_common.sh@931 -- # uname 00:17:51.435 01:25:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:51.435 01:25:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 654997 00:17:51.435 01:25:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:51.435 01:25:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:51.435 01:25:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 654997' 00:17:51.435 killing process with pid 654997 00:17:51.435 01:25:43 -- common/autotest_common.sh@945 -- # kill 654997 00:17:51.435 01:25:43 -- common/autotest_common.sh@950 -- # wait 654997 00:17:51.720 01:25:43 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:51.720 01:25:43 -- target/tls.sh@227 -- # cleanup 00:17:51.720 01:25:43 -- target/tls.sh@15 -- # process_shm --id 0 00:17:51.720 01:25:43 -- common/autotest_common.sh@796 -- # type=--id 00:17:51.720 01:25:43 -- common/autotest_common.sh@797 -- # id=0 00:17:51.720 01:25:43 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:51.720 01:25:43 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:51.720 01:25:43 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:51.720 01:25:43 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 
]] 00:17:51.720 01:25:43 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:51.720 01:25:43 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:51.720 nvmf_trace.0 00:17:51.720 01:25:43 -- common/autotest_common.sh@811 -- # return 0 00:17:51.720 01:25:43 -- target/tls.sh@16 -- # killprocess 655076 00:17:51.720 01:25:43 -- common/autotest_common.sh@926 -- # '[' -z 655076 ']' 00:17:51.720 01:25:43 -- common/autotest_common.sh@930 -- # kill -0 655076 00:17:51.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (655076) - No such process 00:17:51.720 01:25:43 -- common/autotest_common.sh@953 -- # echo 'Process with pid 655076 is not found' 00:17:51.720 Process with pid 655076 is not found 00:17:51.720 01:25:43 -- target/tls.sh@17 -- # nvmftestfini 00:17:51.720 01:25:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:51.720 01:25:43 -- nvmf/common.sh@116 -- # sync 00:17:51.720 01:25:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:51.720 01:25:43 -- nvmf/common.sh@119 -- # set +e 00:17:51.720 01:25:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:51.720 01:25:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:51.720 rmmod nvme_tcp 00:17:51.720 rmmod nvme_fabrics 00:17:51.984 rmmod nvme_keyring 00:17:51.984 01:25:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:51.984 01:25:43 -- nvmf/common.sh@123 -- # set -e 00:17:51.984 01:25:43 -- nvmf/common.sh@124 -- # return 0 00:17:51.984 01:25:43 -- nvmf/common.sh@477 -- # '[' -n 654997 ']' 00:17:51.984 01:25:43 -- nvmf/common.sh@478 -- # killprocess 654997 00:17:51.984 01:25:43 -- common/autotest_common.sh@926 -- # '[' -z 654997 ']' 00:17:51.984 01:25:43 -- common/autotest_common.sh@930 -- # kill -0 654997 00:17:51.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (654997) - No such process 00:17:51.984 01:25:43 -- common/autotest_common.sh@953 -- # echo 'Process with pid 654997 is not found' 00:17:51.984 Process with pid 654997 is not found 00:17:51.984 01:25:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:51.985 01:25:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:51.985 01:25:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:51.985 01:25:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.985 01:25:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:51.985 01:25:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.985 01:25:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.985 01:25:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.884 01:25:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:53.884 01:25:45 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:53.884 00:17:53.884 real 1m14.642s 00:17:53.884 user 1m55.886s 00:17:53.884 sys 0m26.863s 00:17:53.884 01:25:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:53.884 01:25:45 -- common/autotest_common.sh@10 -- # set +x 00:17:53.884 ************************************ 00:17:53.884 END TEST nvmf_tls 00:17:53.884 ************************************ 00:17:53.884 01:25:45 -- 
nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:53.884 01:25:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:53.884 01:25:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:53.884 01:25:45 -- common/autotest_common.sh@10 -- # set +x 00:17:53.884 ************************************ 00:17:53.884 START TEST nvmf_fips 00:17:53.884 ************************************ 00:17:53.884 01:25:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:53.884 * Looking for test storage... 00:17:53.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:17:53.884 01:25:45 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.884 01:25:45 -- nvmf/common.sh@7 -- # uname -s 00:17:53.884 01:25:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.884 01:25:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.884 01:25:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.884 01:25:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.884 01:25:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.884 01:25:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.884 01:25:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.884 01:25:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.884 01:25:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.884 01:25:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.884 01:25:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.884 01:25:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.884 01:25:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.884 01:25:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.884 01:25:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.884 01:25:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.884 01:25:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.884 01:25:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.884 01:25:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.884 01:25:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.884 01:25:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.884 01:25:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.884 01:25:45 -- paths/export.sh@5 -- # export PATH 00:17:53.884 01:25:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.884 01:25:45 -- nvmf/common.sh@46 -- # : 0 00:17:53.884 01:25:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:53.884 01:25:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:53.884 01:25:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:53.884 01:25:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.884 01:25:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.884 01:25:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:53.884 01:25:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:53.884 01:25:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:53.884 01:25:45 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.884 01:25:45 -- fips/fips.sh@89 -- # check_openssl_version 00:17:53.884 01:25:45 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:53.884 01:25:45 -- fips/fips.sh@85 -- # openssl version 00:17:53.884 01:25:45 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:53.884 01:25:45 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:53.884 01:25:45 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:53.884 01:25:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:53.884 01:25:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:53.884 01:25:45 -- scripts/common.sh@335 -- # IFS=.-: 00:17:53.884 01:25:45 -- scripts/common.sh@335 -- # read -ra ver1 00:17:53.884 01:25:45 -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.884 01:25:45 -- scripts/common.sh@336 -- # read -ra ver2 00:17:53.884 01:25:45 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:53.884 01:25:45 -- scripts/common.sh@339 -- # ver1_l=3 00:17:53.884 01:25:45 -- scripts/common.sh@340 -- # ver2_l=3 00:17:53.884 01:25:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 
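[editor's note] The scripts/common.sh trace above is the `ge 3.0.9 3.0.0` check: cmp_versions splits both version strings on ".-:", walks the fields left to right, and succeeds as soon as a field of the first version exceeds the corresponding field of the second (or all fields compare equal). A compact stand-alone sketch of the same idea, simplified and not the literal SPDK helper:

    # Sketch: return 0 if $1 >= $2 for dotted versions (simplified re-implementation).
    version_ge() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x > y )) && return 0
            (( x < y )) && return 1
        done
        return 0    # all fields equal
    }
    version_ge 3.0.9 3.0.0 && echo "openssl is new enough"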
00:17:53.884 01:25:45 -- scripts/common.sh@343 -- # case "$op" in 00:17:53.884 01:25:45 -- scripts/common.sh@347 -- # : 1 00:17:53.884 01:25:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:53.884 01:25:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:53.884 01:25:45 -- scripts/common.sh@364 -- # decimal 3 00:17:53.884 01:25:45 -- scripts/common.sh@352 -- # local d=3 00:17:53.884 01:25:45 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:53.884 01:25:45 -- scripts/common.sh@354 -- # echo 3 00:17:53.884 01:25:45 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:53.884 01:25:45 -- scripts/common.sh@365 -- # decimal 3 00:17:53.884 01:25:45 -- scripts/common.sh@352 -- # local d=3 00:17:53.884 01:25:45 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:53.884 01:25:45 -- scripts/common.sh@354 -- # echo 3 00:17:53.884 01:25:45 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:53.885 01:25:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:53.885 01:25:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:53.885 01:25:45 -- scripts/common.sh@363 -- # (( v++ )) 00:17:53.885 01:25:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:53.885 01:25:45 -- scripts/common.sh@364 -- # decimal 0 00:17:53.885 01:25:45 -- scripts/common.sh@352 -- # local d=0 00:17:53.885 01:25:45 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:53.885 01:25:45 -- scripts/common.sh@354 -- # echo 0 00:17:53.885 01:25:45 -- scripts/common.sh@364 -- # ver1[v]=0 00:17:54.143 01:25:45 -- scripts/common.sh@365 -- # decimal 0 00:17:54.143 01:25:45 -- scripts/common.sh@352 -- # local d=0 00:17:54.143 01:25:45 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:54.143 01:25:45 -- scripts/common.sh@354 -- # echo 0 00:17:54.143 01:25:45 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:54.143 01:25:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:54.143 01:25:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:54.143 01:25:45 -- scripts/common.sh@363 -- # (( v++ )) 00:17:54.143 01:25:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:54.143 01:25:45 -- scripts/common.sh@364 -- # decimal 9 00:17:54.143 01:25:45 -- scripts/common.sh@352 -- # local d=9 00:17:54.143 01:25:45 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:54.143 01:25:45 -- scripts/common.sh@354 -- # echo 9 00:17:54.143 01:25:45 -- scripts/common.sh@364 -- # ver1[v]=9 00:17:54.143 01:25:45 -- scripts/common.sh@365 -- # decimal 0 00:17:54.143 01:25:45 -- scripts/common.sh@352 -- # local d=0 00:17:54.143 01:25:45 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:54.143 01:25:45 -- scripts/common.sh@354 -- # echo 0 00:17:54.143 01:25:45 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:54.143 01:25:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:54.143 01:25:45 -- scripts/common.sh@366 -- # return 0 00:17:54.143 01:25:45 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:54.143 01:25:45 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:54.143 01:25:45 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:54.143 01:25:45 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:54.143 01:25:45 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:54.143 01:25:45 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:54.143 01:25:45 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:54.143 01:25:45 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:54.143 01:25:45 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:54.143 01:25:45 -- fips/fips.sh@114 -- # build_openssl_config 00:17:54.143 01:25:45 -- fips/fips.sh@37 -- # cat 00:17:54.143 01:25:45 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:54.143 01:25:45 -- fips/fips.sh@58 -- # cat - 00:17:54.143 01:25:45 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:54.143 01:25:45 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:54.143 01:25:45 -- fips/fips.sh@117 -- # mapfile -t providers 00:17:54.143 01:25:45 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:17:54.143 01:25:45 -- fips/fips.sh@117 -- # openssl list -providers 00:17:54.143 01:25:45 -- fips/fips.sh@117 -- # grep name 00:17:54.143 01:25:45 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:54.143 01:25:45 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:54.143 01:25:45 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:54.143 01:25:45 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:54.143 01:25:45 -- fips/fips.sh@128 -- # : 00:17:54.143 01:25:45 -- common/autotest_common.sh@640 -- # local es=0 00:17:54.143 01:25:45 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:54.143 01:25:45 -- common/autotest_common.sh@628 -- # local arg=openssl 00:17:54.143 01:25:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:54.143 01:25:45 -- common/autotest_common.sh@632 -- # type -t openssl 00:17:54.143 01:25:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:54.143 01:25:45 -- common/autotest_common.sh@634 -- # type -P openssl 00:17:54.143 01:25:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:54.143 01:25:45 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:17:54.143 01:25:45 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:17:54.143 01:25:45 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:17:54.143 Error setting digest 00:17:54.143 0072D1CC0A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:54.144 0072D1CC0A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:54.144 01:25:45 -- common/autotest_common.sh@643 -- # es=1 00:17:54.144 01:25:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:54.144 01:25:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:54.144 01:25:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
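[editor's note] The block above is the core of the FIPS sanity check: fips.sh locates fips.so under the OpenSSL modulesdir, builds a temporary config that forces the FIPS provider (OPENSSL_CONF=spdk_fips.conf), confirms that both the base and fips providers are listed, and then asserts that a non-approved digest fails. The "Error setting digest" lines for MD5 are the expected outcome, wrapped in the NOT helper so the test passes when the command errors out. A minimal sketch of that negative check, assuming an OpenSSL 3.x build with the FIPS provider enabled and a hypothetical config path:

    # Sketch: verify FIPS enforcement by expecting MD5 to be rejected.
    export OPENSSL_CONF=/etc/ssl/spdk_fips.conf     # hypothetical path to a fips-enabling config
    if echo "fips probe" | openssl md5 >/dev/null 2>&1; then
        echo "FAIL: MD5 succeeded, FIPS mode is not enforced" >&2
        exit 1
    else
        echo "OK: MD5 rejected as expected under FIPS"
    fi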
00:17:54.144 01:25:45 -- fips/fips.sh@131 -- # nvmftestinit 00:17:54.144 01:25:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:54.144 01:25:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.144 01:25:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:54.144 01:25:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:54.144 01:25:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:54.144 01:25:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.144 01:25:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.144 01:25:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.144 01:25:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:54.144 01:25:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:54.144 01:25:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:54.144 01:25:45 -- common/autotest_common.sh@10 -- # set +x 00:17:56.047 01:25:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:56.047 01:25:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:56.047 01:25:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:56.047 01:25:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:56.047 01:25:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:56.047 01:25:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:56.047 01:25:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:56.047 01:25:47 -- nvmf/common.sh@294 -- # net_devs=() 00:17:56.047 01:25:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:56.047 01:25:47 -- nvmf/common.sh@295 -- # e810=() 00:17:56.047 01:25:47 -- nvmf/common.sh@295 -- # local -ga e810 00:17:56.047 01:25:47 -- nvmf/common.sh@296 -- # x722=() 00:17:56.047 01:25:47 -- nvmf/common.sh@296 -- # local -ga x722 00:17:56.047 01:25:47 -- nvmf/common.sh@297 -- # mlx=() 00:17:56.047 01:25:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:56.047 01:25:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.047 01:25:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.047 01:25:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.047 01:25:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.047 01:25:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.047 01:25:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.047 01:25:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.047 01:25:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.047 01:25:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.047 01:25:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.047 01:25:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.047 01:25:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:56.047 01:25:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:56.047 01:25:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:56.047 01:25:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:56.047 01:25:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:56.047 Found 0000:0a:00.0 
(0x8086 - 0x159b) 00:17:56.047 01:25:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:56.047 01:25:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:56.047 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:56.047 01:25:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:56.047 01:25:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:56.047 01:25:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:56.047 01:25:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.047 01:25:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:56.047 01:25:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.048 01:25:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:56.048 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:56.048 01:25:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.048 01:25:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:56.048 01:25:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.048 01:25:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:56.048 01:25:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.048 01:25:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:56.048 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:56.048 01:25:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.048 01:25:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:56.048 01:25:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:56.048 01:25:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:56.048 01:25:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:56.048 01:25:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:56.048 01:25:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.048 01:25:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.048 01:25:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:56.048 01:25:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:56.048 01:25:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:56.048 01:25:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:56.048 01:25:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:56.048 01:25:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:56.048 01:25:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.048 01:25:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:56.048 01:25:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:56.048 01:25:47 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:17:56.048 01:25:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:56.306 01:25:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:56.306 01:25:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:56.306 01:25:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:56.306 01:25:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:56.306 01:25:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:56.306 01:25:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:56.306 01:25:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:56.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:56.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:17:56.306 00:17:56.306 --- 10.0.0.2 ping statistics --- 00:17:56.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.306 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:17:56.306 01:25:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:56.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:56.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:17:56.306 00:17:56.306 --- 10.0.0.1 ping statistics --- 00:17:56.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.306 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:17:56.306 01:25:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.306 01:25:47 -- nvmf/common.sh@410 -- # return 0 00:17:56.306 01:25:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:56.306 01:25:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.306 01:25:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:56.306 01:25:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:56.306 01:25:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.306 01:25:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:56.306 01:25:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:56.306 01:25:47 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:56.306 01:25:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:56.306 01:25:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:56.306 01:25:47 -- common/autotest_common.sh@10 -- # set +x 00:17:56.306 01:25:47 -- nvmf/common.sh@469 -- # nvmfpid=658493 00:17:56.306 01:25:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:56.306 01:25:47 -- nvmf/common.sh@470 -- # waitforlisten 658493 00:17:56.306 01:25:47 -- common/autotest_common.sh@819 -- # '[' -z 658493 ']' 00:17:56.306 01:25:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.306 01:25:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:56.306 01:25:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.306 01:25:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:56.306 01:25:47 -- common/autotest_common.sh@10 -- # set +x 00:17:56.306 [2024-07-27 01:25:48.026515] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
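[editor's note] nvmf_tcp_init above carves the two e810 ports into a point-to-point test rig: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1/24, port 4420 is opened in iptables, and connectivity is proven with a ping in each direction before nvmf_tgt is launched inside the namespace. Condensed from the trace above (interface names are specific to this machine's NICs):

    # Sketch of the namespace wiring used by nvmf_tcp_init (names from this run).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1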
00:17:56.306 [2024-07-27 01:25:48.026592] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.306 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.565 [2024-07-27 01:25:48.097003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.565 [2024-07-27 01:25:48.210668] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:56.565 [2024-07-27 01:25:48.210839] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.565 [2024-07-27 01:25:48.210859] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.565 [2024-07-27 01:25:48.210873] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:56.565 [2024-07-27 01:25:48.210902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.505 01:25:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:57.505 01:25:48 -- common/autotest_common.sh@852 -- # return 0 00:17:57.505 01:25:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:57.505 01:25:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:57.505 01:25:48 -- common/autotest_common.sh@10 -- # set +x 00:17:57.505 01:25:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.505 01:25:48 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:57.505 01:25:48 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:57.505 01:25:48 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:57.505 01:25:48 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:57.505 01:25:48 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:57.505 01:25:48 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:57.505 01:25:48 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:57.505 01:25:48 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.505 [2024-07-27 01:25:49.203438] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.505 [2024-07-27 01:25:49.219424] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:57.505 [2024-07-27 01:25:49.219652] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.505 malloc0 00:17:57.763 01:25:49 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.763 01:25:49 -- fips/fips.sh@148 -- # bdevperf_pid=658655 00:17:57.763 01:25:49 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.763 01:25:49 -- fips/fips.sh@149 -- # waitforlisten 658655 /var/tmp/bdevperf.sock 00:17:57.763 01:25:49 -- common/autotest_common.sh@819 -- # '[' -z 658655 ']' 00:17:57.763 01:25:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.763 01:25:49 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:17:57.763 01:25:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.763 01:25:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:57.763 01:25:49 -- common/autotest_common.sh@10 -- # set +x 00:17:57.763 [2024-07-27 01:25:49.331489] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:57.763 [2024-07-27 01:25:49.331575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658655 ] 00:17:57.763 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.763 [2024-07-27 01:25:49.389904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.763 [2024-07-27 01:25:49.495222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.700 01:25:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:58.700 01:25:50 -- common/autotest_common.sh@852 -- # return 0 00:17:58.700 01:25:50 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:58.958 [2024-07-27 01:25:50.477159] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.958 TLSTESTn1 00:17:58.958 01:25:50 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:58.958 Running I/O for 10 seconds... 
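[editor's note] The fips.sh data path mirrors the tls.sh one: a PSK in the NVMe/TCP interchange format (NVMeTLSkey-1:01:...:) is written to key.txt with mode 0600, the target exposes a TLS-capable listener, and the initiator-side bdevperf attaches a controller with --psk, which produces the TLSTESTn1 bdev that the 10-second verify workload then runs against. The initiator-side attach, as issued through bdevperf's RPC socket in this run:

    # Sketch: attach a TLS-protected NVMe/TCP controller through bdevperf's RPC socket.
    RPC=./scripts/rpc.py
    KEY=./test/nvmf/fips/key.txt            # PSK interchange file, chmod 0600
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
            --psk "$KEY"
    # The resulting bdev appears as TLSTESTn1; bdevperf.py perform_tests drives I/O against it.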
00:18:11.161 00:18:11.161 Latency(us) 00:18:11.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.161 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:11.161 Verification LBA range: start 0x0 length 0x2000 00:18:11.161 TLSTESTn1 : 10.03 2075.95 8.11 0.00 0.00 61560.36 8252.68 71846.87 00:18:11.161 =================================================================================================================== 00:18:11.161 Total : 2075.95 8.11 0.00 0.00 61560.36 8252.68 71846.87 00:18:11.161 0 00:18:11.161 01:26:00 -- fips/fips.sh@1 -- # cleanup 00:18:11.161 01:26:00 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:11.161 01:26:00 -- common/autotest_common.sh@796 -- # type=--id 00:18:11.161 01:26:00 -- common/autotest_common.sh@797 -- # id=0 00:18:11.161 01:26:00 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:11.161 01:26:00 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:11.161 01:26:00 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:11.161 01:26:00 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:11.161 01:26:00 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:11.161 01:26:00 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:11.161 nvmf_trace.0 00:18:11.161 01:26:00 -- common/autotest_common.sh@811 -- # return 0 00:18:11.161 01:26:00 -- fips/fips.sh@16 -- # killprocess 658655 00:18:11.161 01:26:00 -- common/autotest_common.sh@926 -- # '[' -z 658655 ']' 00:18:11.161 01:26:00 -- common/autotest_common.sh@930 -- # kill -0 658655 00:18:11.161 01:26:00 -- common/autotest_common.sh@931 -- # uname 00:18:11.161 01:26:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:11.161 01:26:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 658655 00:18:11.161 01:26:00 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:11.161 01:26:00 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:11.161 01:26:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 658655' 00:18:11.161 killing process with pid 658655 00:18:11.161 01:26:00 -- common/autotest_common.sh@945 -- # kill 658655 00:18:11.161 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.161 00:18:11.161 Latency(us) 00:18:11.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.161 =================================================================================================================== 00:18:11.161 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.161 01:26:00 -- common/autotest_common.sh@950 -- # wait 658655 00:18:11.161 01:26:01 -- fips/fips.sh@17 -- # nvmftestfini 00:18:11.161 01:26:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:11.161 01:26:01 -- nvmf/common.sh@116 -- # sync 00:18:11.161 01:26:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:11.161 01:26:01 -- nvmf/common.sh@119 -- # set +e 00:18:11.161 01:26:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:11.161 01:26:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:11.161 rmmod nvme_tcp 00:18:11.161 rmmod nvme_fabrics 00:18:11.161 rmmod nvme_keyring 00:18:11.161 01:26:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:11.161 01:26:01 -- nvmf/common.sh@123 -- # set -e 00:18:11.161 01:26:01 -- nvmf/common.sh@124 -- # return 0 00:18:11.161 
01:26:01 -- nvmf/common.sh@477 -- # '[' -n 658493 ']' 00:18:11.161 01:26:01 -- nvmf/common.sh@478 -- # killprocess 658493 00:18:11.161 01:26:01 -- common/autotest_common.sh@926 -- # '[' -z 658493 ']' 00:18:11.161 01:26:01 -- common/autotest_common.sh@930 -- # kill -0 658493 00:18:11.161 01:26:01 -- common/autotest_common.sh@931 -- # uname 00:18:11.161 01:26:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:11.161 01:26:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 658493 00:18:11.161 01:26:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:11.161 01:26:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:11.161 01:26:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 658493' 00:18:11.161 killing process with pid 658493 00:18:11.161 01:26:01 -- common/autotest_common.sh@945 -- # kill 658493 00:18:11.161 01:26:01 -- common/autotest_common.sh@950 -- # wait 658493 00:18:11.161 01:26:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:11.161 01:26:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:11.161 01:26:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:11.161 01:26:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.161 01:26:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:11.161 01:26:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.161 01:26:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.161 01:26:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.098 01:26:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:12.098 01:26:03 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:12.098 00:18:12.098 real 0m17.981s 00:18:12.098 user 0m22.150s 00:18:12.098 sys 0m7.175s 00:18:12.098 01:26:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:12.098 01:26:03 -- common/autotest_common.sh@10 -- # set +x 00:18:12.098 ************************************ 00:18:12.098 END TEST nvmf_fips 00:18:12.098 ************************************ 00:18:12.098 01:26:03 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:12.098 01:26:03 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:12.098 01:26:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:12.098 01:26:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:12.098 01:26:03 -- common/autotest_common.sh@10 -- # set +x 00:18:12.098 ************************************ 00:18:12.098 START TEST nvmf_fuzz 00:18:12.098 ************************************ 00:18:12.098 01:26:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:12.098 * Looking for test storage... 
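[editor's note] Each test script in this log is launched through run_test, which prints the START/END TEST banners and the real/user/sys timing seen above and below. A simplified sketch of that banner-and-run pattern, not the literal autotest_common.sh helper:

    # Sketch: banner-and-run wrapper in the spirit of run_test (simplified).
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test_sketch nvmf_fuzz ./test/nvmf/target/fabrics_fuzz.sh --transport=tcp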
00:18:12.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:12.098 01:26:03 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:12.098 01:26:03 -- nvmf/common.sh@7 -- # uname -s 00:18:12.098 01:26:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.098 01:26:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.098 01:26:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.098 01:26:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.098 01:26:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.098 01:26:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.098 01:26:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.098 01:26:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.098 01:26:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.098 01:26:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.098 01:26:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.098 01:26:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.098 01:26:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.098 01:26:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.098 01:26:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:12.098 01:26:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:12.098 01:26:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.098 01:26:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.098 01:26:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.098 01:26:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.098 01:26:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.098 01:26:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.098 01:26:03 -- paths/export.sh@5 -- # export PATH 00:18:12.098 01:26:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.098 01:26:03 -- nvmf/common.sh@46 -- # : 0 00:18:12.098 01:26:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:12.098 01:26:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:12.098 01:26:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:12.098 01:26:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.098 01:26:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.098 01:26:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:12.098 01:26:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:12.098 01:26:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:12.098 01:26:03 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:12.098 01:26:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:12.098 01:26:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.098 01:26:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:12.098 01:26:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:12.098 01:26:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:12.098 01:26:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.098 01:26:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.098 01:26:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.098 01:26:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:12.098 01:26:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:12.098 01:26:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:12.098 01:26:03 -- common/autotest_common.sh@10 -- # set +x 00:18:13.998 01:26:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:13.998 01:26:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:13.998 01:26:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:13.998 01:26:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:13.998 01:26:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:13.998 01:26:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:13.998 01:26:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:13.998 01:26:05 -- nvmf/common.sh@294 -- # net_devs=() 00:18:13.998 01:26:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:13.998 01:26:05 -- nvmf/common.sh@295 -- # e810=() 00:18:13.998 01:26:05 -- nvmf/common.sh@295 -- # local -ga e810 00:18:13.998 01:26:05 -- nvmf/common.sh@296 -- # x722=() 
00:18:13.998 01:26:05 -- nvmf/common.sh@296 -- # local -ga x722 00:18:13.998 01:26:05 -- nvmf/common.sh@297 -- # mlx=() 00:18:13.998 01:26:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:13.998 01:26:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:13.998 01:26:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:13.998 01:26:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:13.998 01:26:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:13.998 01:26:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:13.998 01:26:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:13.998 01:26:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:13.998 01:26:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:13.998 01:26:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:13.998 01:26:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:13.998 01:26:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:13.998 01:26:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:13.998 01:26:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:13.998 01:26:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:13.998 01:26:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:13.998 01:26:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:13.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:13.998 01:26:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:13.998 01:26:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:13.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:13.998 01:26:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:13.998 01:26:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:13.998 01:26:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.998 01:26:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:13.998 01:26:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.998 01:26:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:13.998 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:13.998 01:26:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
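[editor's note] gather_supported_nvmf_pci_devs above works from a whitelist of Intel E810/X722 and Mellanox device IDs, then maps each matching PCI function to its kernel netdev by globbing /sys/bus/pci/devices/<bdf>/net/. That mapping step, as a stand-alone sketch:

    # Sketch: list the network interfaces backed by a given PCI function.
    pci_to_netdevs() {
        local bdf=$1 dev
        for dev in /sys/bus/pci/devices/"$bdf"/net/*; do
            [[ -e $dev ]] || continue      # glob may not match if the port has no netdev
            echo "${dev##*/}"
        done
    }
    pci_to_netdevs 0000:0a:00.0            # prints cvl_0_0 on this machine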
00:18:13.998 01:26:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:13.998 01:26:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.998 01:26:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:13.998 01:26:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.998 01:26:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:13.998 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:13.998 01:26:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.998 01:26:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:13.998 01:26:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:13.998 01:26:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:13.998 01:26:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.998 01:26:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.998 01:26:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:13.998 01:26:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:13.998 01:26:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:13.998 01:26:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:13.998 01:26:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:13.998 01:26:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:13.998 01:26:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.998 01:26:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:13.998 01:26:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:13.998 01:26:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:13.998 01:26:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:13.998 01:26:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:13.998 01:26:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:13.998 01:26:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:13.998 01:26:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:13.998 01:26:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:13.998 01:26:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:13.998 01:26:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:13.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:18:13.998 00:18:13.998 --- 10.0.0.2 ping statistics --- 00:18:13.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.998 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:18:13.998 01:26:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:13.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:13.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:18:13.998 00:18:13.998 --- 10.0.0.1 ping statistics --- 00:18:13.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.998 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:18:13.998 01:26:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.998 01:26:05 -- nvmf/common.sh@410 -- # return 0 00:18:13.998 01:26:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:13.998 01:26:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.998 01:26:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:13.998 01:26:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.998 01:26:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:13.998 01:26:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:13.999 01:26:05 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=661987 00:18:13.999 01:26:05 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:13.999 01:26:05 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:13.999 01:26:05 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 661987 00:18:13.999 01:26:05 -- common/autotest_common.sh@819 -- # '[' -z 661987 ']' 00:18:13.999 01:26:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.999 01:26:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:13.999 01:26:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
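[editor's note] waitforlisten above blocks until the freshly started nvmf_tgt answers on its RPC socket (/var/tmp/spdk.sock here), so the rpc_cmd calls that follow never race the application startup. A simplified sketch of that wait loop, polling via rpc.py rather than reproducing the helper's exact internals:

    # Sketch: poll until an SPDK app's RPC socket accepts requests, or give up.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1          # app died while we waited
            ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.5
        done
        return 1
    }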
00:18:13.999 01:26:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:13.999 01:26:05 -- common/autotest_common.sh@10 -- # set +x 00:18:14.935 01:26:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:14.935 01:26:06 -- common/autotest_common.sh@852 -- # return 0 00:18:14.935 01:26:06 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:14.935 01:26:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:14.935 01:26:06 -- common/autotest_common.sh@10 -- # set +x 00:18:14.935 01:26:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:14.935 01:26:06 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:14.935 01:26:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:14.935 01:26:06 -- common/autotest_common.sh@10 -- # set +x 00:18:15.195 Malloc0 00:18:15.195 01:26:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.195 01:26:06 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:15.195 01:26:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.195 01:26:06 -- common/autotest_common.sh@10 -- # set +x 00:18:15.195 01:26:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.195 01:26:06 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:15.195 01:26:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.195 01:26:06 -- common/autotest_common.sh@10 -- # set +x 00:18:15.195 01:26:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.195 01:26:06 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.195 01:26:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.195 01:26:06 -- common/autotest_common.sh@10 -- # set +x 00:18:15.195 01:26:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.195 01:26:06 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:15.195 01:26:06 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:47.319 Fuzzing completed. Shutting down the fuzz application 00:18:47.319 00:18:47.319 Dumping successful admin opcodes: 00:18:47.319 8, 9, 10, 24, 00:18:47.319 Dumping successful io opcodes: 00:18:47.319 0, 9, 00:18:47.319 NS: 0x200003aeff00 I/O qp, Total commands completed: 438388, total successful commands: 2559, random_seed: 4034082624 00:18:47.319 NS: 0x200003aeff00 admin qp, Total commands completed: 55120, total successful commands: 441, random_seed: 3593211840 00:18:47.319 01:26:37 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:47.319 Fuzzing completed. 
Shutting down the fuzz application 00:18:47.319 00:18:47.319 Dumping successful admin opcodes: 00:18:47.319 24, 00:18:47.319 Dumping successful io opcodes: 00:18:47.319 00:18:47.319 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1352344432 00:18:47.319 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1352470540 00:18:47.319 01:26:38 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:47.319 01:26:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:47.319 01:26:38 -- common/autotest_common.sh@10 -- # set +x 00:18:47.319 01:26:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:47.319 01:26:38 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:47.319 01:26:38 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:47.319 01:26:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:47.319 01:26:38 -- nvmf/common.sh@116 -- # sync 00:18:47.319 01:26:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:47.319 01:26:38 -- nvmf/common.sh@119 -- # set +e 00:18:47.319 01:26:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:47.319 01:26:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:47.319 rmmod nvme_tcp 00:18:47.319 rmmod nvme_fabrics 00:18:47.319 rmmod nvme_keyring 00:18:47.319 01:26:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:47.319 01:26:38 -- nvmf/common.sh@123 -- # set -e 00:18:47.319 01:26:38 -- nvmf/common.sh@124 -- # return 0 00:18:47.319 01:26:38 -- nvmf/common.sh@477 -- # '[' -n 661987 ']' 00:18:47.319 01:26:38 -- nvmf/common.sh@478 -- # killprocess 661987 00:18:47.319 01:26:38 -- common/autotest_common.sh@926 -- # '[' -z 661987 ']' 00:18:47.319 01:26:38 -- common/autotest_common.sh@930 -- # kill -0 661987 00:18:47.319 01:26:38 -- common/autotest_common.sh@931 -- # uname 00:18:47.319 01:26:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:47.319 01:26:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 661987 00:18:47.319 01:26:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:47.319 01:26:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:47.319 01:26:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 661987' 00:18:47.319 killing process with pid 661987 00:18:47.319 01:26:38 -- common/autotest_common.sh@945 -- # kill 661987 00:18:47.319 01:26:38 -- common/autotest_common.sh@950 -- # wait 661987 00:18:47.319 01:26:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:47.319 01:26:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:47.319 01:26:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:47.319 01:26:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:47.319 01:26:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:47.319 01:26:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.319 01:26:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.319 01:26:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.856 01:26:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:49.856 01:26:41 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:18:49.856 00:18:49.856 real 0m37.486s 00:18:49.856 user 0m51.252s 00:18:49.856 sys 0m15.335s 
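The rpc_cmd calls traced at fabrics_fuzz.sh@19-25 above are thin wrappers around SPDK's JSON-RPC interface. As a rough hand-driven equivalent (assuming the stock scripts/rpc.py from the SPDK tree and the default /var/tmp/spdk.sock socket that this target listens on), the same single-namespace fuzz target could be configured with:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # same transport options as the trace
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512           # 64 MiB RAM-backed bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The two nvme_fuzz invocations above then target that listener through the trid string 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'.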
00:18:49.856 01:26:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.856 01:26:41 -- common/autotest_common.sh@10 -- # set +x 00:18:49.856 ************************************ 00:18:49.856 END TEST nvmf_fuzz 00:18:49.856 ************************************ 00:18:49.856 01:26:41 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:49.856 01:26:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:49.856 01:26:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:49.856 01:26:41 -- common/autotest_common.sh@10 -- # set +x 00:18:49.856 ************************************ 00:18:49.856 START TEST nvmf_multiconnection 00:18:49.856 ************************************ 00:18:49.856 01:26:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:49.856 * Looking for test storage... 00:18:49.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.856 01:26:41 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.856 01:26:41 -- nvmf/common.sh@7 -- # uname -s 00:18:49.856 01:26:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.856 01:26:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.856 01:26:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.856 01:26:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.856 01:26:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.856 01:26:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.856 01:26:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.856 01:26:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.856 01:26:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.856 01:26:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.856 01:26:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.856 01:26:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.856 01:26:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.856 01:26:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.856 01:26:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.856 01:26:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.856 01:26:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.856 01:26:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.856 01:26:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.856 01:26:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.856 01:26:41 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.857 01:26:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.857 01:26:41 -- paths/export.sh@5 -- # export PATH 00:18:49.857 01:26:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.857 01:26:41 -- nvmf/common.sh@46 -- # : 0 00:18:49.857 01:26:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:49.857 01:26:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:49.857 01:26:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:49.857 01:26:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.857 01:26:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.857 01:26:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:49.857 01:26:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:49.857 01:26:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:49.857 01:26:41 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:49.857 01:26:41 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:49.857 01:26:41 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:49.857 01:26:41 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:49.857 01:26:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:49.857 01:26:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.857 01:26:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:49.857 01:26:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:49.857 01:26:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:49.857 01:26:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.857 01:26:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.857 01:26:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.857 01:26:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:49.857 01:26:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:49.857 01:26:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:49.857 01:26:41 -- common/autotest_common.sh@10 -- 
# set +x 00:18:51.760 01:26:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:51.760 01:26:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:51.760 01:26:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:51.760 01:26:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:51.760 01:26:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:51.760 01:26:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:51.760 01:26:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:51.760 01:26:43 -- nvmf/common.sh@294 -- # net_devs=() 00:18:51.760 01:26:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:51.760 01:26:43 -- nvmf/common.sh@295 -- # e810=() 00:18:51.760 01:26:43 -- nvmf/common.sh@295 -- # local -ga e810 00:18:51.760 01:26:43 -- nvmf/common.sh@296 -- # x722=() 00:18:51.760 01:26:43 -- nvmf/common.sh@296 -- # local -ga x722 00:18:51.760 01:26:43 -- nvmf/common.sh@297 -- # mlx=() 00:18:51.760 01:26:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:51.760 01:26:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.760 01:26:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.760 01:26:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.760 01:26:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.760 01:26:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.760 01:26:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.760 01:26:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.760 01:26:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.760 01:26:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.760 01:26:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.760 01:26:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.760 01:26:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:51.760 01:26:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:51.760 01:26:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:51.760 01:26:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:51.760 01:26:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:51.760 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:51.760 01:26:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:51.760 01:26:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:51.760 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:51.760 01:26:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.760 01:26:43 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:51.760 01:26:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:51.760 01:26:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.760 01:26:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:51.760 01:26:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.760 01:26:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:51.760 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:51.760 01:26:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.760 01:26:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:51.760 01:26:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.760 01:26:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:51.760 01:26:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.760 01:26:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:51.760 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:51.760 01:26:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.760 01:26:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:51.760 01:26:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:51.760 01:26:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:51.760 01:26:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:51.760 01:26:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.760 01:26:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.760 01:26:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.760 01:26:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:51.760 01:26:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:51.760 01:26:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:51.760 01:26:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:51.760 01:26:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:51.760 01:26:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.760 01:26:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:51.760 01:26:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:51.760 01:26:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:51.760 01:26:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:51.760 01:26:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:51.760 01:26:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:51.760 01:26:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:51.760 01:26:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.760 01:26:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.760 01:26:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.760 01:26:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:51.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:51.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:18:51.760 00:18:51.760 --- 10.0.0.2 ping statistics --- 00:18:51.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.760 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:18:51.760 01:26:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:51.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:18:51.760 00:18:51.761 --- 10.0.0.1 ping statistics --- 00:18:51.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.761 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:18:51.761 01:26:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.761 01:26:43 -- nvmf/common.sh@410 -- # return 0 00:18:51.761 01:26:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:51.761 01:26:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.761 01:26:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:51.761 01:26:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:51.761 01:26:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.761 01:26:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:51.761 01:26:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:51.761 01:26:43 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:51.761 01:26:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:51.761 01:26:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:51.761 01:26:43 -- common/autotest_common.sh@10 -- # set +x 00:18:51.761 01:26:43 -- nvmf/common.sh@469 -- # nvmfpid=667970 00:18:51.761 01:26:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:51.761 01:26:43 -- nvmf/common.sh@470 -- # waitforlisten 667970 00:18:51.761 01:26:43 -- common/autotest_common.sh@819 -- # '[' -z 667970 ']' 00:18:51.761 01:26:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.761 01:26:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:51.761 01:26:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.761 01:26:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:51.761 01:26:43 -- common/autotest_common.sh@10 -- # set +x 00:18:51.761 [2024-07-27 01:26:43.431863] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:51.761 [2024-07-27 01:26:43.431940] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.761 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.761 [2024-07-27 01:26:43.509381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.020 [2024-07-27 01:26:43.630778] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:52.020 [2024-07-27 01:26:43.630938] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.020 [2024-07-27 01:26:43.630956] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
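For the multiconnection test the target application is relaunched inside the same namespace, this time on four cores (nvmf/common.sh@468-470 in the trace above). A commented form of that launch, with the flag meanings read off the NOTICE lines the app prints, would look roughly like:

  # -i 0      shared-memory id (matches the --file-prefix=spdk0 in the DPDK EAL parameters)
  # -e 0xFFFF enable all tracepoint groups ("Tracepoint Group Mask 0xFFFF specified")
  # -m 0xF    core mask for cores 0-3, i.e. the four reactors reported at startup
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # waitforlisten then polls /var/tmp/spdk.sock before any rpc_cmd is issued against the target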
00:18:52.020 [2024-07-27 01:26:43.630968] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:52.020 [2024-07-27 01:26:43.631031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.020 [2024-07-27 01:26:43.631062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.020 [2024-07-27 01:26:43.631122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.020 [2024-07-27 01:26:43.631126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.954 01:26:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:52.954 01:26:44 -- common/autotest_common.sh@852 -- # return 0 00:18:52.954 01:26:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:52.954 01:26:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:52.954 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.954 01:26:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.954 01:26:44 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:52.954 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.954 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.954 [2024-07-27 01:26:44.430565] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.954 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.954 01:26:44 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:52.954 01:26:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.954 01:26:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:52.954 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.954 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.954 Malloc1 00:18:52.954 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.954 01:26:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:52.954 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.954 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.954 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.954 01:26:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:52.954 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.954 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.954 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.954 01:26:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:52.954 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.954 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.954 [2024-07-27 01:26:44.487317] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.954 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.954 01:26:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.954 01:26:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:52.954 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.954 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.954 Malloc2 00:18:52.954 01:26:44 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.954 01:26:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:52.954 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.954 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.954 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.954 01:26:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:52.954 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.954 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.954 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.954 01:26:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:52.954 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.954 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.954 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.955 01:26:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.955 01:26:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:52.955 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.955 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.955 Malloc3 00:18:52.955 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.955 01:26:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:52.955 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.955 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.955 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.955 01:26:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:52.955 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.955 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.955 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.955 01:26:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:52.955 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.955 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.955 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.955 01:26:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.955 01:26:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:52.955 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.955 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.955 Malloc4 00:18:52.955 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.955 01:26:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:52.955 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.955 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.955 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.955 01:26:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:52.955 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.955 01:26:44 
-- common/autotest_common.sh@10 -- # set +x 00:18:52.955 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.955 01:26:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:52.955 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.955 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.955 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.955 01:26:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.955 01:26:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:52.955 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.955 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.955 Malloc5 00:18:52.955 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.955 01:26:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:52.955 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.955 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.955 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.955 01:26:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:52.955 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.955 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.955 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.955 01:26:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:52.955 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.955 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.955 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.955 01:26:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.955 01:26:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:52.955 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.955 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:52.955 Malloc6 00:18:52.955 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.955 01:26:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:52.955 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.955 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.215 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.215 01:26:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:53.215 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.215 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.215 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.215 01:26:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:53.215 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.215 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.215 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.215 01:26:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.215 01:26:44 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:53.215 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.215 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.215 Malloc7 00:18:53.215 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.215 01:26:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:53.215 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.215 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.215 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.215 01:26:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:53.215 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.215 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.215 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.215 01:26:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:53.215 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.215 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.215 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.215 01:26:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.215 01:26:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:53.215 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.215 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.215 Malloc8 00:18:53.215 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.215 01:26:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:53.215 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.215 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.215 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.215 01:26:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:53.215 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.215 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.215 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.215 01:26:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:53.215 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.215 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.215 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.215 01:26:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.215 01:26:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:53.215 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.215 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.215 Malloc9 00:18:53.215 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.215 01:26:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:53.215 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.215 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.215 01:26:44 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.216 01:26:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:53.216 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.216 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.216 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.216 01:26:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:53.216 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.216 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.216 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.216 01:26:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.216 01:26:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:53.216 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.216 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.216 Malloc10 00:18:53.216 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.216 01:26:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:53.216 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.216 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.216 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.216 01:26:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:53.216 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.216 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.216 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.216 01:26:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:53.216 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.216 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.216 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.216 01:26:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.216 01:26:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:53.216 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.216 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.216 Malloc11 00:18:53.216 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.216 01:26:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:53.216 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.216 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.216 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.216 01:26:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:53.216 01:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.216 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.475 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.475 01:26:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:53.475 01:26:44 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:18:53.475 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:18:53.475 01:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.475 01:26:44 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:53.475 01:26:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.475 01:26:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:54.041 01:26:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:54.041 01:26:45 -- common/autotest_common.sh@1177 -- # local i=0 00:18:54.041 01:26:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:54.042 01:26:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:54.042 01:26:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:55.947 01:26:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:55.947 01:26:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:55.947 01:26:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:18:55.947 01:26:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:55.947 01:26:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:55.947 01:26:47 -- common/autotest_common.sh@1187 -- # return 0 00:18:55.947 01:26:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:55.947 01:26:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:56.884 01:26:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:56.884 01:26:48 -- common/autotest_common.sh@1177 -- # local i=0 00:18:56.884 01:26:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:56.884 01:26:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:56.884 01:26:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:58.788 01:26:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:58.788 01:26:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:58.788 01:26:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:18:58.788 01:26:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:58.788 01:26:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:58.788 01:26:50 -- common/autotest_common.sh@1187 -- # return 0 00:18:58.788 01:26:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.788 01:26:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:59.355 01:26:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:59.355 01:26:50 -- common/autotest_common.sh@1177 -- # local i=0 00:18:59.355 01:26:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:59.355 01:26:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:59.355 01:26:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:01.260 01:26:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:01.260 01:26:52 -- common/autotest_common.sh@1186 -- # 
lsblk -l -o NAME,SERIAL 00:19:01.260 01:26:52 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:19:01.260 01:26:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:01.260 01:26:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:01.260 01:26:53 -- common/autotest_common.sh@1187 -- # return 0 00:19:01.260 01:26:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.260 01:26:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:19:02.194 01:26:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:19:02.194 01:26:53 -- common/autotest_common.sh@1177 -- # local i=0 00:19:02.194 01:26:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.194 01:26:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:02.194 01:26:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:04.121 01:26:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:04.121 01:26:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:04.121 01:26:55 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:19:04.121 01:26:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:04.121 01:26:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.121 01:26:55 -- common/autotest_common.sh@1187 -- # return 0 00:19:04.121 01:26:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.121 01:26:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:19:04.689 01:26:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:19:04.689 01:26:56 -- common/autotest_common.sh@1177 -- # local i=0 00:19:04.689 01:26:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:04.689 01:26:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:04.689 01:26:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:07.225 01:26:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:07.225 01:26:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:07.225 01:26:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:19:07.225 01:26:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:07.225 01:26:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.225 01:26:58 -- common/autotest_common.sh@1187 -- # return 0 00:19:07.226 01:26:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.226 01:26:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:19:07.484 01:26:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:19:07.484 01:26:59 -- common/autotest_common.sh@1177 -- # local i=0 00:19:07.484 01:26:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:07.484 01:26:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:07.484 01:26:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:09.386 
01:27:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:09.386 01:27:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:09.386 01:27:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:19:09.386 01:27:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:09.386 01:27:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.386 01:27:01 -- common/autotest_common.sh@1187 -- # return 0 00:19:09.386 01:27:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.386 01:27:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:19:10.318 01:27:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:19:10.318 01:27:01 -- common/autotest_common.sh@1177 -- # local i=0 00:19:10.318 01:27:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:10.318 01:27:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:10.318 01:27:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:12.852 01:27:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:12.852 01:27:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:12.852 01:27:03 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:19:12.852 01:27:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:12.852 01:27:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:12.852 01:27:04 -- common/autotest_common.sh@1187 -- # return 0 00:19:12.852 01:27:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:12.852 01:27:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:19:13.110 01:27:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:19:13.110 01:27:04 -- common/autotest_common.sh@1177 -- # local i=0 00:19:13.110 01:27:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:13.110 01:27:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:13.110 01:27:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:15.018 01:27:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:15.018 01:27:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:15.018 01:27:06 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:19:15.018 01:27:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:15.018 01:27:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:15.018 01:27:06 -- common/autotest_common.sh@1187 -- # return 0 00:19:15.018 01:27:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:15.018 01:27:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:19:15.956 01:27:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:15.956 01:27:07 -- common/autotest_common.sh@1177 -- # local i=0 00:19:15.956 01:27:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:15.956 01:27:07 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:15.956 01:27:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:18.486 01:27:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:18.486 01:27:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:18.486 01:27:09 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:19:18.486 01:27:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:18.486 01:27:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:18.486 01:27:09 -- common/autotest_common.sh@1187 -- # return 0 00:19:18.486 01:27:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:18.486 01:27:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:19:19.055 01:27:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:19.055 01:27:10 -- common/autotest_common.sh@1177 -- # local i=0 00:19:19.055 01:27:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:19.055 01:27:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:19.055 01:27:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:20.955 01:27:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:20.955 01:27:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:20.955 01:27:12 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:19:20.955 01:27:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:20.955 01:27:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:20.955 01:27:12 -- common/autotest_common.sh@1187 -- # return 0 00:19:20.955 01:27:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:20.955 01:27:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:19:21.892 01:27:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:21.892 01:27:13 -- common/autotest_common.sh@1177 -- # local i=0 00:19:21.892 01:27:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:21.892 01:27:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:21.892 01:27:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:23.826 01:27:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:23.826 01:27:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:23.826 01:27:15 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:19:23.826 01:27:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:23.826 01:27:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:23.826 01:27:15 -- common/autotest_common.sh@1187 -- # return 0 00:19:23.826 01:27:15 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:23.826 [global] 00:19:23.826 thread=1 00:19:23.826 invalidate=1 00:19:23.826 rw=read 00:19:23.826 time_based=1 00:19:23.826 runtime=10 00:19:23.826 ioengine=libaio 00:19:23.826 direct=1 00:19:23.826 bs=262144 00:19:23.826 iodepth=64 00:19:23.826 norandommap=1 00:19:23.826 numjobs=1 00:19:23.826 00:19:23.826 [job0] 
00:19:23.826 filename=/dev/nvme0n1 00:19:23.826 [job1] 00:19:23.826 filename=/dev/nvme10n1 00:19:23.826 [job2] 00:19:23.826 filename=/dev/nvme1n1 00:19:23.826 [job3] 00:19:23.826 filename=/dev/nvme2n1 00:19:23.826 [job4] 00:19:23.826 filename=/dev/nvme3n1 00:19:23.826 [job5] 00:19:23.826 filename=/dev/nvme4n1 00:19:23.826 [job6] 00:19:23.826 filename=/dev/nvme5n1 00:19:23.826 [job7] 00:19:23.826 filename=/dev/nvme6n1 00:19:23.826 [job8] 00:19:23.826 filename=/dev/nvme7n1 00:19:23.826 [job9] 00:19:23.826 filename=/dev/nvme8n1 00:19:23.826 [job10] 00:19:23.826 filename=/dev/nvme9n1 00:19:24.084 Could not set queue depth (nvme0n1) 00:19:24.084 Could not set queue depth (nvme10n1) 00:19:24.084 Could not set queue depth (nvme1n1) 00:19:24.084 Could not set queue depth (nvme2n1) 00:19:24.084 Could not set queue depth (nvme3n1) 00:19:24.084 Could not set queue depth (nvme4n1) 00:19:24.084 Could not set queue depth (nvme5n1) 00:19:24.084 Could not set queue depth (nvme6n1) 00:19:24.084 Could not set queue depth (nvme7n1) 00:19:24.084 Could not set queue depth (nvme8n1) 00:19:24.084 Could not set queue depth (nvme9n1) 00:19:24.084 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.084 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.084 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.084 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.084 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.084 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.084 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.084 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.084 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.084 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.084 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.084 fio-3.35 00:19:24.084 Starting 11 threads 00:19:36.293 00:19:36.293 job0: (groupid=0, jobs=1): err= 0: pid=673031: Sat Jul 27 01:27:26 2024 00:19:36.293 read: IOPS=341, BW=85.4MiB/s (89.5MB/s)(855MiB/10015msec) 00:19:36.293 slat (usec): min=9, max=302401, avg=2911.69, stdev=11058.66 00:19:36.293 clat (msec): min=14, max=734, avg=184.34, stdev=128.08 00:19:36.293 lat (msec): min=19, max=795, avg=187.25, stdev=130.15 00:19:36.293 clat percentiles (msec): 00:19:36.293 | 1.00th=[ 30], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 69], 00:19:36.293 | 30.00th=[ 96], 40.00th=[ 118], 50.00th=[ 159], 60.00th=[ 188], 00:19:36.293 | 70.00th=[ 236], 80.00th=[ 275], 90.00th=[ 368], 95.00th=[ 418], 00:19:36.293 | 99.00th=[ 567], 99.50th=[ 651], 99.90th=[ 701], 99.95th=[ 735], 00:19:36.293 | 99.99th=[ 735] 00:19:36.293 bw ( KiB/s): min=25088, max=204288, per=5.78%, avg=85944.05, stdev=55519.73, samples=20 00:19:36.293 iops : min= 98, max= 798, avg=335.65, stdev=216.84, samples=20 00:19:36.293 lat (msec) : 20=0.32%, 50=11.96%, 100=19.64%, 250=41.04%, 500=25.17% 
00:19:36.293 lat (msec) : 750=1.87% 00:19:36.293 cpu : usr=0.17%, sys=1.32%, ctx=721, majf=0, minf=4097 00:19:36.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:36.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.293 issued rwts: total=3421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.293 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.293 job1: (groupid=0, jobs=1): err= 0: pid=673062: Sat Jul 27 01:27:26 2024 00:19:36.293 read: IOPS=659, BW=165MiB/s (173MB/s)(1672MiB/10134msec) 00:19:36.293 slat (usec): min=9, max=182718, avg=1326.90, stdev=4808.01 00:19:36.293 clat (msec): min=9, max=346, avg=95.60, stdev=48.98 00:19:36.293 lat (msec): min=9, max=373, avg=96.93, stdev=49.68 00:19:36.293 clat percentiles (msec): 00:19:36.293 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 52], 20.00th=[ 57], 00:19:36.293 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 77], 60.00th=[ 88], 00:19:36.293 | 70.00th=[ 112], 80.00th=[ 146], 90.00th=[ 165], 95.00th=[ 182], 00:19:36.293 | 99.00th=[ 257], 99.50th=[ 271], 99.90th=[ 334], 99.95th=[ 334], 00:19:36.293 | 99.99th=[ 347] 00:19:36.293 bw ( KiB/s): min=87552, max=295936, per=11.41%, avg=169545.75, stdev=66664.98, samples=20 00:19:36.293 iops : min= 342, max= 1156, avg=662.25, stdev=260.40, samples=20 00:19:36.293 lat (msec) : 10=0.04%, 20=0.36%, 50=8.58%, 100=58.79%, 250=31.00% 00:19:36.293 lat (msec) : 500=1.23% 00:19:36.293 cpu : usr=0.40%, sys=2.14%, ctx=1168, majf=0, minf=4097 00:19:36.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:36.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.293 issued rwts: total=6687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.293 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.293 job2: (groupid=0, jobs=1): err= 0: pid=673102: Sat Jul 27 01:27:26 2024 00:19:36.293 read: IOPS=445, BW=111MiB/s (117MB/s)(1131MiB/10143msec) 00:19:36.293 slat (usec): min=9, max=285863, avg=1515.84, stdev=9776.26 00:19:36.293 clat (usec): min=1024, max=721282, avg=141907.11, stdev=129505.95 00:19:36.293 lat (usec): min=1043, max=827630, avg=143422.94, stdev=131269.25 00:19:36.293 clat percentiles (msec): 00:19:36.293 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 19], 20.00th=[ 32], 00:19:36.293 | 30.00th=[ 42], 40.00th=[ 53], 50.00th=[ 99], 60.00th=[ 159], 00:19:36.293 | 70.00th=[ 201], 80.00th=[ 247], 90.00th=[ 330], 95.00th=[ 397], 00:19:36.293 | 99.00th=[ 498], 99.50th=[ 634], 99.90th=[ 718], 99.95th=[ 718], 00:19:36.293 | 99.99th=[ 718] 00:19:36.293 bw ( KiB/s): min=31232, max=314880, per=7.68%, avg=114134.15, stdev=86789.64, samples=20 00:19:36.293 iops : min= 122, max= 1230, avg=445.80, stdev=339.04, samples=20 00:19:36.293 lat (msec) : 2=0.42%, 4=1.17%, 10=3.80%, 20=6.06%, 50=26.35% 00:19:36.293 lat (msec) : 100=12.51%, 250=30.20%, 500=18.62%, 750=0.86% 00:19:36.293 cpu : usr=0.24%, sys=1.63%, ctx=1158, majf=0, minf=4097 00:19:36.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:36.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.293 issued rwts: total=4523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.293 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:19:36.293 job3: (groupid=0, jobs=1): err= 0: pid=673114: Sat Jul 27 01:27:26 2024 00:19:36.293 read: IOPS=290, BW=72.5MiB/s (76.1MB/s)(736MiB/10149msec) 00:19:36.293 slat (usec): min=14, max=229741, avg=3206.04, stdev=12143.45 00:19:36.293 clat (msec): min=23, max=795, avg=217.17, stdev=108.94 00:19:36.293 lat (msec): min=23, max=795, avg=220.38, stdev=110.71 00:19:36.293 clat percentiles (msec): 00:19:36.293 | 1.00th=[ 40], 5.00th=[ 93], 10.00th=[ 114], 20.00th=[ 134], 00:19:36.293 | 30.00th=[ 155], 40.00th=[ 169], 50.00th=[ 188], 60.00th=[ 222], 00:19:36.293 | 70.00th=[ 249], 80.00th=[ 279], 90.00th=[ 363], 95.00th=[ 414], 00:19:36.293 | 99.00th=[ 617], 99.50th=[ 651], 99.90th=[ 735], 99.95th=[ 768], 00:19:36.293 | 99.99th=[ 793] 00:19:36.293 bw ( KiB/s): min=26112, max=166912, per=4.96%, avg=73736.45, stdev=35921.26, samples=20 00:19:36.293 iops : min= 102, max= 652, avg=288.00, stdev=140.33, samples=20 00:19:36.293 lat (msec) : 50=2.51%, 100=3.77%, 250=64.58%, 500=26.66%, 750=2.41% 00:19:36.293 lat (msec) : 1000=0.07% 00:19:36.293 cpu : usr=0.27%, sys=1.08%, ctx=642, majf=0, minf=4097 00:19:36.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:19:36.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.293 issued rwts: total=2945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.293 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.293 job4: (groupid=0, jobs=1): err= 0: pid=673115: Sat Jul 27 01:27:26 2024 00:19:36.293 read: IOPS=922, BW=231MiB/s (242MB/s)(2325MiB/10086msec) 00:19:36.293 slat (usec): min=13, max=44348, avg=1072.20, stdev=3021.64 00:19:36.293 clat (msec): min=23, max=213, avg=68.28, stdev=29.02 00:19:36.293 lat (msec): min=27, max=213, avg=69.36, stdev=29.43 00:19:36.293 clat percentiles (msec): 00:19:36.293 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 40], 00:19:36.293 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 70], 00:19:36.293 | 70.00th=[ 79], 80.00th=[ 89], 90.00th=[ 109], 95.00th=[ 123], 00:19:36.293 | 99.00th=[ 169], 99.50th=[ 186], 99.90th=[ 197], 99.95th=[ 205], 00:19:36.293 | 99.99th=[ 213] 00:19:36.293 bw ( KiB/s): min=115200, max=433664, per=15.90%, avg=236404.20, stdev=87044.62, samples=20 00:19:36.293 iops : min= 450, max= 1694, avg=923.40, stdev=340.06, samples=20 00:19:36.293 lat (msec) : 50=30.60%, 100=55.89%, 250=13.51% 00:19:36.293 cpu : usr=0.70%, sys=2.99%, ctx=1772, majf=0, minf=3721 00:19:36.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:36.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.293 issued rwts: total=9300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.293 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.293 job5: (groupid=0, jobs=1): err= 0: pid=673122: Sat Jul 27 01:27:26 2024 00:19:36.293 read: IOPS=553, BW=138MiB/s (145MB/s)(1404MiB/10142msec) 00:19:36.293 slat (usec): min=10, max=345028, avg=1093.92, stdev=9665.54 00:19:36.293 clat (usec): min=1630, max=714163, avg=114415.50, stdev=120277.92 00:19:36.293 lat (usec): min=1683, max=1031.9k, avg=115509.42, stdev=121651.60 00:19:36.293 clat percentiles (msec): 00:19:36.293 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 15], 20.00th=[ 33], 00:19:36.293 | 30.00th=[ 53], 40.00th=[ 69], 50.00th=[ 81], 60.00th=[ 90], 00:19:36.293 | 
70.00th=[ 112], 80.00th=[ 159], 90.00th=[ 292], 95.00th=[ 401], 00:19:36.293 | 99.00th=[ 592], 99.50th=[ 693], 99.90th=[ 709], 99.95th=[ 709], 00:19:36.293 | 99.99th=[ 718] 00:19:36.293 bw ( KiB/s): min=26624, max=298496, per=9.56%, avg=142097.00, stdev=83021.14, samples=20 00:19:36.293 iops : min= 104, max= 1166, avg=555.05, stdev=324.29, samples=20 00:19:36.293 lat (msec) : 2=0.09%, 4=1.37%, 10=5.89%, 20=7.11%, 50=14.34% 00:19:36.293 lat (msec) : 100=36.92%, 250=22.65%, 500=10.04%, 750=1.59% 00:19:36.293 cpu : usr=0.33%, sys=1.83%, ctx=1516, majf=0, minf=4097 00:19:36.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:36.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.293 issued rwts: total=5615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.293 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.293 job6: (groupid=0, jobs=1): err= 0: pid=673129: Sat Jul 27 01:27:26 2024 00:19:36.293 read: IOPS=274, BW=68.7MiB/s (72.0MB/s)(697MiB/10141msec) 00:19:36.294 slat (usec): min=13, max=261828, avg=3305.90, stdev=13193.87 00:19:36.294 clat (msec): min=40, max=771, avg=229.49, stdev=113.51 00:19:36.294 lat (msec): min=40, max=771, avg=232.79, stdev=115.68 00:19:36.294 clat percentiles (msec): 00:19:36.294 | 1.00th=[ 53], 5.00th=[ 95], 10.00th=[ 120], 20.00th=[ 148], 00:19:36.294 | 30.00th=[ 161], 40.00th=[ 174], 50.00th=[ 192], 60.00th=[ 230], 00:19:36.294 | 70.00th=[ 259], 80.00th=[ 326], 90.00th=[ 384], 95.00th=[ 426], 00:19:36.294 | 99.00th=[ 651], 99.50th=[ 693], 99.90th=[ 768], 99.95th=[ 776], 00:19:36.294 | 99.99th=[ 776] 00:19:36.294 bw ( KiB/s): min=25600, max=109349, per=4.69%, avg=69663.10, stdev=28464.03, samples=20 00:19:36.294 iops : min= 100, max= 427, avg=272.10, stdev=111.17, samples=20 00:19:36.294 lat (msec) : 50=0.39%, 100=5.46%, 250=61.84%, 500=29.94%, 750=2.19% 00:19:36.294 lat (msec) : 1000=0.18% 00:19:36.294 cpu : usr=0.26%, sys=1.07%, ctx=729, majf=0, minf=4097 00:19:36.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:19:36.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.294 issued rwts: total=2786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.294 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.294 job7: (groupid=0, jobs=1): err= 0: pid=673134: Sat Jul 27 01:27:26 2024 00:19:36.294 read: IOPS=567, BW=142MiB/s (149MB/s)(1429MiB/10082msec) 00:19:36.294 slat (usec): min=9, max=650240, avg=1456.10, stdev=11025.37 00:19:36.294 clat (msec): min=4, max=760, avg=111.33, stdev=104.55 00:19:36.294 lat (msec): min=4, max=981, avg=112.79, stdev=105.72 00:19:36.294 clat percentiles (msec): 00:19:36.294 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 42], 20.00th=[ 54], 00:19:36.294 | 30.00th=[ 64], 40.00th=[ 77], 50.00th=[ 87], 60.00th=[ 97], 00:19:36.294 | 70.00th=[ 109], 80.00th=[ 124], 90.00th=[ 188], 95.00th=[ 355], 00:19:36.294 | 99.00th=[ 684], 99.50th=[ 726], 99.90th=[ 760], 99.95th=[ 760], 00:19:36.294 | 99.99th=[ 760] 00:19:36.294 bw ( KiB/s): min=35328, max=287744, per=10.25%, avg=152303.00, stdev=66221.99, samples=19 00:19:36.294 iops : min= 138, max= 1124, avg=594.84, stdev=258.69, samples=19 00:19:36.294 lat (msec) : 10=1.78%, 20=2.45%, 50=12.09%, 100=46.55%, 250=29.39% 00:19:36.294 lat (msec) : 500=6.56%, 750=0.80%, 1000=0.38% 00:19:36.294 cpu 
: usr=0.39%, sys=2.04%, ctx=1294, majf=0, minf=4097 00:19:36.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:36.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.294 issued rwts: total=5717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.294 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.294 job8: (groupid=0, jobs=1): err= 0: pid=673135: Sat Jul 27 01:27:26 2024 00:19:36.294 read: IOPS=728, BW=182MiB/s (191MB/s)(1848MiB/10144msec) 00:19:36.294 slat (usec): min=10, max=225748, avg=1221.04, stdev=5017.88 00:19:36.294 clat (msec): min=3, max=417, avg=86.53, stdev=63.10 00:19:36.294 lat (msec): min=3, max=609, avg=87.76, stdev=64.08 00:19:36.294 clat percentiles (msec): 00:19:36.294 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 39], 00:19:36.294 | 30.00th=[ 43], 40.00th=[ 47], 50.00th=[ 57], 60.00th=[ 86], 00:19:36.294 | 70.00th=[ 110], 80.00th=[ 144], 90.00th=[ 171], 95.00th=[ 188], 00:19:36.294 | 99.00th=[ 326], 99.50th=[ 355], 99.90th=[ 401], 99.95th=[ 409], 00:19:36.294 | 99.99th=[ 418] 00:19:36.294 bw ( KiB/s): min=47616, max=386560, per=12.62%, avg=187619.20, stdev=103910.38, samples=20 00:19:36.294 iops : min= 186, max= 1510, avg=732.85, stdev=405.91, samples=20 00:19:36.294 lat (msec) : 4=0.08%, 10=1.62%, 20=2.18%, 50=41.38%, 100=21.22% 00:19:36.294 lat (msec) : 250=31.45%, 500=2.07% 00:19:36.294 cpu : usr=0.59%, sys=2.39%, ctx=1550, majf=0, minf=4097 00:19:36.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:36.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.294 issued rwts: total=7393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.294 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.294 job9: (groupid=0, jobs=1): err= 0: pid=673136: Sat Jul 27 01:27:26 2024 00:19:36.294 read: IOPS=478, BW=120MiB/s (125MB/s)(1206MiB/10081msec) 00:19:36.294 slat (usec): min=9, max=351170, avg=1217.76, stdev=8846.79 00:19:36.294 clat (usec): min=981, max=741469, avg=132491.56, stdev=126577.11 00:19:36.294 lat (usec): min=1008, max=741485, avg=133709.32, stdev=127401.98 00:19:36.294 clat percentiles (msec): 00:19:36.294 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 23], 20.00th=[ 39], 00:19:36.294 | 30.00th=[ 54], 40.00th=[ 72], 50.00th=[ 94], 60.00th=[ 113], 00:19:36.294 | 70.00th=[ 142], 80.00th=[ 220], 90.00th=[ 313], 95.00th=[ 376], 00:19:36.294 | 99.00th=[ 651], 99.50th=[ 709], 99.90th=[ 726], 99.95th=[ 735], 00:19:36.294 | 99.99th=[ 743] 00:19:36.294 bw ( KiB/s): min=26112, max=270848, per=8.19%, avg=121805.70, stdev=58994.83, samples=20 00:19:36.294 iops : min= 102, max= 1058, avg=475.75, stdev=230.43, samples=20 00:19:36.294 lat (usec) : 1000=0.02% 00:19:36.294 lat (msec) : 2=0.89%, 4=0.27%, 10=2.09%, 20=5.54%, 50=17.88% 00:19:36.294 lat (msec) : 100=25.63%, 250=32.14%, 500=13.52%, 750=2.01% 00:19:36.294 cpu : usr=0.32%, sys=1.66%, ctx=1389, majf=0, minf=4097 00:19:36.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:36.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.294 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.294 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:19:36.294 job10: (groupid=0, jobs=1): err= 0: pid=673137: Sat Jul 27 01:27:26 2024 00:19:36.294 read: IOPS=571, BW=143MiB/s (150MB/s)(1431MiB/10016msec) 00:19:36.294 slat (usec): min=10, max=227064, avg=1285.11, stdev=6427.74 00:19:36.294 clat (usec): min=1243, max=834183, avg=110648.30, stdev=105321.55 00:19:36.294 lat (usec): min=1267, max=834248, avg=111933.40, stdev=106791.05 00:19:36.294 clat percentiles (msec): 00:19:36.294 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 22], 20.00th=[ 34], 00:19:36.294 | 30.00th=[ 37], 40.00th=[ 50], 50.00th=[ 90], 60.00th=[ 115], 00:19:36.294 | 70.00th=[ 138], 80.00th=[ 157], 90.00th=[ 241], 95.00th=[ 321], 00:19:36.294 | 99.00th=[ 542], 99.50th=[ 600], 99.90th=[ 718], 99.95th=[ 793], 00:19:36.294 | 99.99th=[ 835] 00:19:36.294 bw ( KiB/s): min=27648, max=374272, per=9.75%, avg=144874.00, stdev=102730.32, samples=20 00:19:36.294 iops : min= 108, max= 1462, avg=565.85, stdev=401.31, samples=20 00:19:36.294 lat (msec) : 2=0.16%, 4=0.54%, 10=3.70%, 20=4.94%, 50=31.14% 00:19:36.294 lat (msec) : 100=13.56%, 250=37.39%, 500=7.06%, 750=1.43%, 1000=0.07% 00:19:36.294 cpu : usr=0.35%, sys=1.77%, ctx=1447, majf=0, minf=4097 00:19:36.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:36.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.294 issued rwts: total=5723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.294 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.294 00:19:36.294 Run status group 0 (all jobs): 00:19:36.294 READ: bw=1452MiB/s (1522MB/s), 68.7MiB/s-231MiB/s (72.0MB/s-242MB/s), io=14.4GiB (15.4GB), run=10015-10149msec 00:19:36.294 00:19:36.294 Disk stats (read/write): 00:19:36.294 nvme0n1: ios=6582/0, merge=0/0, ticks=1232582/0, in_queue=1232582, util=97.08% 00:19:36.294 nvme10n1: ios=13202/0, merge=0/0, ticks=1232737/0, in_queue=1232737, util=97.30% 00:19:36.294 nvme1n1: ios=8865/0, merge=0/0, ticks=1235680/0, in_queue=1235680, util=97.61% 00:19:36.294 nvme2n1: ios=5595/0, merge=0/0, ticks=1229549/0, in_queue=1229549, util=97.78% 00:19:36.294 nvme3n1: ios=18403/0, merge=0/0, ticks=1231214/0, in_queue=1231214, util=97.86% 00:19:36.294 nvme4n1: ios=11046/0, merge=0/0, ticks=1237584/0, in_queue=1237584, util=98.20% 00:19:36.294 nvme5n1: ios=5394/0, merge=0/0, ticks=1224374/0, in_queue=1224374, util=98.36% 00:19:36.294 nvme6n1: ios=11231/0, merge=0/0, ticks=1233236/0, in_queue=1233236, util=98.48% 00:19:36.294 nvme7n1: ios=14601/0, merge=0/0, ticks=1229060/0, in_queue=1229060, util=98.91% 00:19:36.294 nvme8n1: ios=9446/0, merge=0/0, ticks=1240443/0, in_queue=1240443, util=99.10% 00:19:36.294 nvme9n1: ios=11043/0, merge=0/0, ticks=1239016/0, in_queue=1239016, util=99.23% 00:19:36.294 01:27:26 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:36.294 [global] 00:19:36.294 thread=1 00:19:36.294 invalidate=1 00:19:36.294 rw=randwrite 00:19:36.294 time_based=1 00:19:36.294 runtime=10 00:19:36.294 ioengine=libaio 00:19:36.294 direct=1 00:19:36.294 bs=262144 00:19:36.294 iodepth=64 00:19:36.294 norandommap=1 00:19:36.294 numjobs=1 00:19:36.294 00:19:36.294 [job0] 00:19:36.294 filename=/dev/nvme0n1 00:19:36.294 [job1] 00:19:36.294 filename=/dev/nvme10n1 00:19:36.294 [job2] 00:19:36.294 filename=/dev/nvme1n1 00:19:36.294 [job3] 00:19:36.294 filename=/dev/nvme2n1 00:19:36.294 [job4] 
00:19:36.294 filename=/dev/nvme3n1 00:19:36.294 [job5] 00:19:36.294 filename=/dev/nvme4n1 00:19:36.294 [job6] 00:19:36.294 filename=/dev/nvme5n1 00:19:36.294 [job7] 00:19:36.294 filename=/dev/nvme6n1 00:19:36.294 [job8] 00:19:36.294 filename=/dev/nvme7n1 00:19:36.294 [job9] 00:19:36.294 filename=/dev/nvme8n1 00:19:36.294 [job10] 00:19:36.294 filename=/dev/nvme9n1 00:19:36.294 Could not set queue depth (nvme0n1) 00:19:36.294 Could not set queue depth (nvme10n1) 00:19:36.294 Could not set queue depth (nvme1n1) 00:19:36.294 Could not set queue depth (nvme2n1) 00:19:36.294 Could not set queue depth (nvme3n1) 00:19:36.294 Could not set queue depth (nvme4n1) 00:19:36.294 Could not set queue depth (nvme5n1) 00:19:36.294 Could not set queue depth (nvme6n1) 00:19:36.294 Could not set queue depth (nvme7n1) 00:19:36.294 Could not set queue depth (nvme8n1) 00:19:36.294 Could not set queue depth (nvme9n1) 00:19:36.295 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:36.295 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:36.295 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:36.295 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:36.295 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:36.295 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:36.295 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:36.295 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:36.295 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:36.295 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:36.295 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:36.295 fio-3.35 00:19:36.295 Starting 11 threads 00:19:46.269 00:19:46.269 job0: (groupid=0, jobs=1): err= 0: pid=674102: Sat Jul 27 01:27:37 2024 00:19:46.269 write: IOPS=439, BW=110MiB/s (115MB/s)(1110MiB/10093msec); 0 zone resets 00:19:46.269 slat (usec): min=18, max=79537, avg=1938.25, stdev=4592.71 00:19:46.269 clat (msec): min=4, max=434, avg=143.50, stdev=74.64 00:19:46.269 lat (msec): min=5, max=438, avg=145.44, stdev=75.35 00:19:46.269 clat percentiles (msec): 00:19:46.269 | 1.00th=[ 15], 5.00th=[ 45], 10.00th=[ 72], 20.00th=[ 85], 00:19:46.269 | 30.00th=[ 100], 40.00th=[ 120], 50.00th=[ 134], 60.00th=[ 144], 00:19:46.269 | 70.00th=[ 163], 80.00th=[ 190], 90.00th=[ 226], 95.00th=[ 317], 00:19:46.269 | 99.00th=[ 393], 99.50th=[ 405], 99.90th=[ 426], 99.95th=[ 430], 00:19:46.269 | 99.99th=[ 435] 00:19:46.269 bw ( KiB/s): min=47104, max=192512, per=9.52%, avg=112040.70, stdev=41049.18, samples=20 00:19:46.269 iops : min= 184, max= 752, avg=437.65, stdev=160.35, samples=20 00:19:46.269 lat (msec) : 10=0.23%, 20=1.04%, 50=4.14%, 100=25.45%, 250=60.97% 00:19:46.269 lat (msec) : 500=8.18% 00:19:46.269 cpu : usr=1.52%, sys=1.34%, ctx=1699, majf=0, minf=1 00:19:46.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 
16=0.4%, 32=0.7%, >=64=98.6% 00:19:46.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:46.269 issued rwts: total=0,4440,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.269 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:46.269 job1: (groupid=0, jobs=1): err= 0: pid=674129: Sat Jul 27 01:27:37 2024 00:19:46.269 write: IOPS=461, BW=115MiB/s (121MB/s)(1179MiB/10225msec); 0 zone resets 00:19:46.269 slat (usec): min=20, max=128642, avg=1356.95, stdev=4494.08 00:19:46.269 clat (usec): min=1391, max=1130.9k, avg=137302.48, stdev=124463.25 00:19:46.269 lat (usec): min=1419, max=1130.9k, avg=138659.43, stdev=125346.52 00:19:46.269 clat percentiles (msec): 00:19:46.269 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 34], 00:19:46.269 | 30.00th=[ 89], 40.00th=[ 116], 50.00th=[ 134], 60.00th=[ 146], 00:19:46.269 | 70.00th=[ 165], 80.00th=[ 190], 90.00th=[ 241], 95.00th=[ 300], 00:19:46.269 | 99.00th=[ 877], 99.50th=[ 1045], 99.90th=[ 1116], 99.95th=[ 1116], 00:19:46.269 | 99.99th=[ 1133] 00:19:46.269 bw ( KiB/s): min=36864, max=298496, per=10.12%, avg=119104.50, stdev=54922.41, samples=20 00:19:46.270 iops : min= 144, max= 1166, avg=465.25, stdev=214.54, samples=20 00:19:46.270 lat (msec) : 2=0.11%, 4=0.49%, 10=12.39%, 20=3.94%, 50=5.62% 00:19:46.270 lat (msec) : 100=11.83%, 250=56.78%, 500=7.66%, 1000=0.68%, 2000=0.51% 00:19:46.270 cpu : usr=1.30%, sys=1.64%, ctx=2810, majf=0, minf=1 00:19:46.270 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:46.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:46.270 issued rwts: total=0,4715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.270 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:46.270 job2: (groupid=0, jobs=1): err= 0: pid=674164: Sat Jul 27 01:27:37 2024 00:19:46.270 write: IOPS=424, BW=106MiB/s (111MB/s)(1068MiB/10072msec); 0 zone resets 00:19:46.270 slat (usec): min=23, max=79381, avg=1553.12, stdev=4635.62 00:19:46.270 clat (msec): min=3, max=392, avg=149.19, stdev=82.93 00:19:46.270 lat (msec): min=5, max=392, avg=150.74, stdev=84.00 00:19:46.270 clat percentiles (msec): 00:19:46.270 | 1.00th=[ 13], 5.00th=[ 31], 10.00th=[ 47], 20.00th=[ 73], 00:19:46.270 | 30.00th=[ 99], 40.00th=[ 113], 50.00th=[ 140], 60.00th=[ 161], 00:19:46.270 | 70.00th=[ 190], 80.00th=[ 226], 90.00th=[ 271], 95.00th=[ 313], 00:19:46.270 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 376], 99.95th=[ 384], 00:19:46.270 | 99.99th=[ 393] 00:19:46.270 bw ( KiB/s): min=53248, max=169984, per=9.16%, avg=107787.40, stdev=36628.64, samples=20 00:19:46.270 iops : min= 208, max= 664, avg=421.00, stdev=143.07, samples=20 00:19:46.270 lat (msec) : 4=0.02%, 10=0.37%, 20=1.92%, 50=9.06%, 100=20.64% 00:19:46.270 lat (msec) : 250=54.11%, 500=13.88% 00:19:46.270 cpu : usr=1.06%, sys=1.50%, ctx=2555, majf=0, minf=1 00:19:46.270 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:19:46.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:46.270 issued rwts: total=0,4273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.270 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:46.270 job3: (groupid=0, jobs=1): err= 0: pid=674194: Sat Jul 27 01:27:37 2024 
00:19:46.270 write: IOPS=401, BW=100MiB/s (105MB/s)(1015MiB/10103msec); 0 zone resets 00:19:46.270 slat (usec): min=24, max=56588, avg=2370.08, stdev=4825.00 00:19:46.270 clat (msec): min=11, max=343, avg=156.78, stdev=63.90 00:19:46.270 lat (msec): min=11, max=344, avg=159.15, stdev=64.79 00:19:46.270 clat percentiles (msec): 00:19:46.270 | 1.00th=[ 23], 5.00th=[ 62], 10.00th=[ 70], 20.00th=[ 106], 00:19:46.270 | 30.00th=[ 126], 40.00th=[ 140], 50.00th=[ 157], 60.00th=[ 169], 00:19:46.270 | 70.00th=[ 188], 80.00th=[ 201], 90.00th=[ 230], 95.00th=[ 296], 00:19:46.270 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 342], 99.95th=[ 342], 00:19:46.270 | 99.99th=[ 347] 00:19:46.270 bw ( KiB/s): min=49152, max=204800, per=8.70%, avg=102345.30, stdev=37464.27, samples=20 00:19:46.270 iops : min= 192, max= 800, avg=399.75, stdev=146.36, samples=20 00:19:46.270 lat (msec) : 20=0.84%, 50=2.12%, 100=15.27%, 250=74.78%, 500=6.99% 00:19:46.270 cpu : usr=1.23%, sys=1.20%, ctx=1268, majf=0, minf=1 00:19:46.270 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:46.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:46.270 issued rwts: total=0,4061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.270 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:46.270 job4: (groupid=0, jobs=1): err= 0: pid=674201: Sat Jul 27 01:27:37 2024 00:19:46.270 write: IOPS=297, BW=74.4MiB/s (78.0MB/s)(762MiB/10245msec); 0 zone resets 00:19:46.270 slat (usec): min=23, max=513324, avg=2751.60, stdev=12803.89 00:19:46.270 clat (usec): min=1617, max=1160.7k, avg=212303.20, stdev=153643.89 00:19:46.270 lat (usec): min=1665, max=1160.8k, avg=215054.80, stdev=155113.64 00:19:46.270 clat percentiles (msec): 00:19:46.270 | 1.00th=[ 16], 5.00th=[ 57], 10.00th=[ 91], 20.00th=[ 130], 00:19:46.270 | 30.00th=[ 140], 40.00th=[ 169], 50.00th=[ 188], 60.00th=[ 203], 00:19:46.270 | 70.00th=[ 222], 80.00th=[ 253], 90.00th=[ 334], 95.00th=[ 493], 00:19:46.270 | 99.00th=[ 927], 99.50th=[ 1083], 99.90th=[ 1099], 99.95th=[ 1167], 00:19:46.270 | 99.99th=[ 1167] 00:19:46.270 bw ( KiB/s): min= 6144, max=145408, per=6.49%, avg=76375.85, stdev=35380.24, samples=20 00:19:46.270 iops : min= 24, max= 568, avg=298.30, stdev=138.21, samples=20 00:19:46.270 lat (msec) : 2=0.13%, 4=0.20%, 10=0.23%, 20=1.38%, 50=2.69% 00:19:46.270 lat (msec) : 100=8.01%, 250=66.16%, 500=16.41%, 750=2.66%, 1000=1.15% 00:19:46.270 lat (msec) : 2000=0.98% 00:19:46.270 cpu : usr=0.93%, sys=0.97%, ctx=1442, majf=0, minf=1 00:19:46.270 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:19:46.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:46.270 issued rwts: total=0,3047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.270 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:46.270 job5: (groupid=0, jobs=1): err= 0: pid=674207: Sat Jul 27 01:27:37 2024 00:19:46.270 write: IOPS=319, BW=79.9MiB/s (83.7MB/s)(818MiB/10244msec); 0 zone resets 00:19:46.270 slat (usec): min=24, max=513202, avg=2067.22, stdev=10563.21 00:19:46.270 clat (msec): min=5, max=1154, avg=198.05, stdev=129.53 00:19:46.270 lat (msec): min=7, max=1154, avg=200.12, stdev=131.13 00:19:46.270 clat percentiles (msec): 00:19:46.270 | 1.00th=[ 36], 5.00th=[ 62], 10.00th=[ 89], 20.00th=[ 126], 00:19:46.270 | 30.00th=[ 161], 
40.00th=[ 178], 50.00th=[ 188], 60.00th=[ 194], 00:19:46.270 | 70.00th=[ 207], 80.00th=[ 232], 90.00th=[ 288], 95.00th=[ 330], 00:19:46.270 | 99.00th=[ 911], 99.50th=[ 1083], 99.90th=[ 1133], 99.95th=[ 1150], 00:19:46.270 | 99.99th=[ 1150] 00:19:46.270 bw ( KiB/s): min=18944, max=116224, per=6.98%, avg=82135.35, stdev=28699.94, samples=20 00:19:46.270 iops : min= 74, max= 454, avg=320.80, stdev=112.12, samples=20 00:19:46.270 lat (msec) : 10=0.12%, 20=0.06%, 50=3.42%, 100=8.92%, 250=70.97% 00:19:46.270 lat (msec) : 500=14.46%, 750=0.12%, 1000=1.13%, 2000=0.79% 00:19:46.270 cpu : usr=1.07%, sys=1.00%, ctx=1946, majf=0, minf=1 00:19:46.270 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:46.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:46.270 issued rwts: total=0,3272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.270 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:46.270 job6: (groupid=0, jobs=1): err= 0: pid=674208: Sat Jul 27 01:27:37 2024 00:19:46.270 write: IOPS=466, BW=117MiB/s (122MB/s)(1174MiB/10060msec); 0 zone resets 00:19:46.270 slat (usec): min=14, max=64685, avg=1607.89, stdev=4449.68 00:19:46.270 clat (usec): min=1681, max=459865, avg=135473.09, stdev=83306.02 00:19:46.270 lat (usec): min=1720, max=459906, avg=137080.98, stdev=84428.22 00:19:46.270 clat percentiles (msec): 00:19:46.270 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 39], 20.00th=[ 60], 00:19:46.270 | 30.00th=[ 84], 40.00th=[ 112], 50.00th=[ 129], 60.00th=[ 142], 00:19:46.270 | 70.00th=[ 165], 80.00th=[ 194], 90.00th=[ 249], 95.00th=[ 305], 00:19:46.270 | 99.00th=[ 368], 99.50th=[ 414], 99.90th=[ 430], 99.95th=[ 447], 00:19:46.270 | 99.99th=[ 460] 00:19:46.270 bw ( KiB/s): min=51200, max=233984, per=10.08%, avg=118587.40, stdev=48979.21, samples=20 00:19:46.270 iops : min= 200, max= 914, avg=463.20, stdev=191.35, samples=20 00:19:46.270 lat (msec) : 2=0.04%, 4=0.28%, 10=1.28%, 20=2.88%, 50=9.22% 00:19:46.270 lat (msec) : 100=22.66%, 250=53.95%, 500=9.69% 00:19:46.270 cpu : usr=1.21%, sys=1.47%, ctx=2443, majf=0, minf=1 00:19:46.270 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:46.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:46.270 issued rwts: total=0,4695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.270 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:46.270 job7: (groupid=0, jobs=1): err= 0: pid=674209: Sat Jul 27 01:27:37 2024 00:19:46.270 write: IOPS=403, BW=101MiB/s (106MB/s)(1035MiB/10246msec); 0 zone resets 00:19:46.270 slat (usec): min=14, max=340830, avg=1969.16, stdev=7930.32 00:19:46.270 clat (msec): min=2, max=1232, avg=156.40, stdev=134.01 00:19:46.270 lat (msec): min=2, max=1232, avg=158.37, stdev=135.48 00:19:46.270 clat percentiles (msec): 00:19:46.270 | 1.00th=[ 18], 5.00th=[ 38], 10.00th=[ 63], 20.00th=[ 85], 00:19:46.270 | 30.00th=[ 90], 40.00th=[ 105], 50.00th=[ 138], 60.00th=[ 163], 00:19:46.270 | 70.00th=[ 188], 80.00th=[ 205], 90.00th=[ 232], 95.00th=[ 305], 00:19:46.270 | 99.00th=[ 1045], 99.50th=[ 1133], 99.90th=[ 1217], 99.95th=[ 1217], 00:19:46.270 | 99.99th=[ 1234] 00:19:46.270 bw ( KiB/s): min= 8704, max=193536, per=8.87%, avg=104338.90, stdev=49368.07, samples=20 00:19:46.270 iops : min= 34, max= 756, avg=407.55, stdev=192.86, samples=20 00:19:46.270 lat (msec) 
: 4=0.17%, 10=0.43%, 20=0.94%, 50=5.85%, 100=31.66% 00:19:46.270 lat (msec) : 250=53.53%, 500=5.80%, 750=0.10%, 1000=0.51%, 2000=1.01% 00:19:46.270 cpu : usr=1.23%, sys=1.31%, ctx=1959, majf=0, minf=1 00:19:46.270 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:46.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:46.270 issued rwts: total=0,4138,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.270 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:46.270 job8: (groupid=0, jobs=1): err= 0: pid=674211: Sat Jul 27 01:27:37 2024 00:19:46.270 write: IOPS=451, BW=113MiB/s (118MB/s)(1155MiB/10228msec); 0 zone resets 00:19:46.270 slat (usec): min=19, max=108538, avg=1623.56, stdev=4449.14 00:19:46.270 clat (usec): min=1793, max=924764, avg=139950.16, stdev=85732.81 00:19:46.270 lat (usec): min=1844, max=924808, avg=141573.71, stdev=86688.74 00:19:46.270 clat percentiles (msec): 00:19:46.270 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 36], 20.00th=[ 81], 00:19:46.270 | 30.00th=[ 88], 40.00th=[ 104], 50.00th=[ 130], 60.00th=[ 155], 00:19:46.270 | 70.00th=[ 171], 80.00th=[ 194], 90.00th=[ 243], 95.00th=[ 309], 00:19:46.270 | 99.00th=[ 384], 99.50th=[ 405], 99.90th=[ 726], 99.95th=[ 927], 00:19:46.270 | 99.99th=[ 927] 00:19:46.270 bw ( KiB/s): min=47104, max=200704, per=9.92%, avg=116678.40, stdev=49000.72, samples=20 00:19:46.270 iops : min= 184, max= 784, avg=455.75, stdev=191.37, samples=20 00:19:46.271 lat (msec) : 2=0.04%, 4=0.17%, 10=2.34%, 20=3.42%, 50=5.91% 00:19:46.271 lat (msec) : 100=26.53%, 250=52.22%, 500=9.26%, 750=0.02%, 1000=0.09% 00:19:46.271 cpu : usr=1.25%, sys=1.46%, ctx=2409, majf=0, minf=1 00:19:46.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:19:46.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:46.271 issued rwts: total=0,4621,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.271 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:46.271 job9: (groupid=0, jobs=1): err= 0: pid=674212: Sat Jul 27 01:27:37 2024 00:19:46.271 write: IOPS=512, BW=128MiB/s (134MB/s)(1314MiB/10244msec); 0 zone resets 00:19:46.271 slat (usec): min=16, max=199768, avg=1188.53, stdev=5220.94 00:19:46.271 clat (msec): min=2, max=1149, avg=123.51, stdev=111.06 00:19:46.271 lat (msec): min=2, max=1149, avg=124.70, stdev=111.61 00:19:46.271 clat percentiles (msec): 00:19:46.271 | 1.00th=[ 7], 5.00th=[ 13], 10.00th=[ 23], 20.00th=[ 52], 00:19:46.271 | 30.00th=[ 66], 40.00th=[ 81], 50.00th=[ 103], 60.00th=[ 130], 00:19:46.271 | 70.00th=[ 148], 80.00th=[ 182], 90.00th=[ 220], 95.00th=[ 266], 00:19:46.271 | 99.00th=[ 776], 99.50th=[ 877], 99.90th=[ 1062], 99.95th=[ 1083], 00:19:46.271 | 99.99th=[ 1150] 00:19:46.271 bw ( KiB/s): min=59392, max=231936, per=11.29%, avg=132900.65, stdev=53056.89, samples=20 00:19:46.271 iops : min= 232, max= 906, avg=519.10, stdev=207.27, samples=20 00:19:46.271 lat (msec) : 4=0.25%, 10=3.22%, 20=5.02%, 50=10.01%, 100=30.70% 00:19:46.271 lat (msec) : 250=44.25%, 500=5.25%, 750=0.25%, 1000=0.91%, 2000=0.13% 00:19:46.271 cpu : usr=1.37%, sys=1.68%, ctx=2998, majf=0, minf=1 00:19:46.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:46.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.271 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:46.271 issued rwts: total=0,5254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.271 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:46.271 job10: (groupid=0, jobs=1): err= 0: pid=674213: Sat Jul 27 01:27:37 2024 00:19:46.271 write: IOPS=447, BW=112MiB/s (117MB/s)(1147MiB/10248msec); 0 zone resets 00:19:46.271 slat (usec): min=24, max=340742, avg=1509.67, stdev=7092.16 00:19:46.271 clat (msec): min=11, max=1170, avg=141.21, stdev=117.28 00:19:46.271 lat (msec): min=11, max=1170, avg=142.72, stdev=118.61 00:19:46.271 clat percentiles (msec): 00:19:46.271 | 1.00th=[ 28], 5.00th=[ 42], 10.00th=[ 68], 20.00th=[ 85], 00:19:46.271 | 30.00th=[ 90], 40.00th=[ 101], 50.00th=[ 120], 60.00th=[ 132], 00:19:46.271 | 70.00th=[ 153], 80.00th=[ 186], 90.00th=[ 224], 95.00th=[ 262], 00:19:46.271 | 99.00th=[ 885], 99.50th=[ 1053], 99.90th=[ 1150], 99.95th=[ 1150], 00:19:46.271 | 99.99th=[ 1167] 00:19:46.271 bw ( KiB/s): min=16896, max=192000, per=9.85%, avg=115846.95, stdev=51296.49, samples=20 00:19:46.271 iops : min= 66, max= 750, avg=452.50, stdev=200.40, samples=20 00:19:46.271 lat (msec) : 20=0.33%, 50=6.32%, 100=33.33%, 250=53.23%, 500=5.34% 00:19:46.271 lat (msec) : 750=0.09%, 1000=0.81%, 2000=0.57% 00:19:46.271 cpu : usr=1.34%, sys=1.48%, ctx=2468, majf=0, minf=1 00:19:46.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:19:46.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:46.271 issued rwts: total=0,4588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.271 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:46.271 00:19:46.271 Run status group 0 (all jobs): 00:19:46.271 WRITE: bw=1149MiB/s (1205MB/s), 74.4MiB/s-128MiB/s (78.0MB/s-134MB/s), io=11.5GiB (12.3GB), run=10060-10248msec 00:19:46.271 00:19:46.271 Disk stats (read/write): 00:19:46.271 nvme0n1: ios=49/8631, merge=0/0, ticks=81/1211347, in_queue=1211428, util=97.55% 00:19:46.271 nvme10n1: ios=48/9394, merge=0/0, ticks=1663/1238712, in_queue=1240375, util=99.52% 00:19:46.271 nvme1n1: ios=50/8298, merge=0/0, ticks=4215/1214303, in_queue=1218518, util=99.84% 00:19:46.271 nvme2n1: ios=33/7936, merge=0/0, ticks=51/1206730, in_queue=1206781, util=97.72% 00:19:46.271 nvme3n1: ios=48/6037, merge=0/0, ticks=86/1232100, in_queue=1232186, util=98.10% 00:19:46.271 nvme4n1: ios=50/6488, merge=0/0, ticks=1776/1239158, in_queue=1240934, util=99.85% 00:19:46.271 nvme5n1: ios=0/9097, merge=0/0, ticks=0/1216062, in_queue=1216062, util=98.25% 00:19:46.271 nvme6n1: ios=0/8226, merge=0/0, ticks=0/1221440, in_queue=1221440, util=98.43% 00:19:46.271 nvme7n1: ios=0/9204, merge=0/0, ticks=0/1241452, in_queue=1241452, util=98.82% 00:19:46.271 nvme8n1: ios=44/10460, merge=0/0, ticks=3088/1194869, in_queue=1197957, util=99.98% 00:19:46.271 nvme9n1: ios=44/9124, merge=0/0, ticks=2253/1239019, in_queue=1241272, util=99.93% 00:19:46.271 01:27:37 -- target/multiconnection.sh@36 -- # sync 00:19:46.271 01:27:37 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:46.271 01:27:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:46.271 01:27:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:46.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:46.271 01:27:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:46.271 01:27:37 -- 
common/autotest_common.sh@1198 -- # local i=0 00:19:46.271 01:27:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:46.271 01:27:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:19:46.271 01:27:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:46.271 01:27:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:19:46.271 01:27:37 -- common/autotest_common.sh@1210 -- # return 0 00:19:46.271 01:27:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.271 01:27:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.271 01:27:37 -- common/autotest_common.sh@10 -- # set +x 00:19:46.271 01:27:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.271 01:27:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:46.271 01:27:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:46.271 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:46.271 01:27:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:46.271 01:27:37 -- common/autotest_common.sh@1198 -- # local i=0 00:19:46.271 01:27:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:46.271 01:27:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:19:46.271 01:27:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:46.271 01:27:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:19:46.271 01:27:37 -- common/autotest_common.sh@1210 -- # return 0 00:19:46.271 01:27:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:46.271 01:27:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.271 01:27:37 -- common/autotest_common.sh@10 -- # set +x 00:19:46.271 01:27:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.271 01:27:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:46.271 01:27:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:46.530 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:46.530 01:27:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:46.530 01:27:38 -- common/autotest_common.sh@1198 -- # local i=0 00:19:46.530 01:27:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:46.530 01:27:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:19:46.530 01:27:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:46.530 01:27:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:19:46.530 01:27:38 -- common/autotest_common.sh@1210 -- # return 0 00:19:46.530 01:27:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:46.530 01:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.530 01:27:38 -- common/autotest_common.sh@10 -- # set +x 00:19:46.530 01:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.530 01:27:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:46.530 01:27:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:46.789 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:46.789 01:27:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:46.789 01:27:38 -- common/autotest_common.sh@1198 -- # local i=0 00:19:46.789 01:27:38 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:46.789 01:27:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:19:46.789 01:27:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:46.789 01:27:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:19:46.789 01:27:38 -- common/autotest_common.sh@1210 -- # return 0 00:19:46.789 01:27:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:46.789 01:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.789 01:27:38 -- common/autotest_common.sh@10 -- # set +x 00:19:46.789 01:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.789 01:27:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:46.789 01:27:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:47.049 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:47.049 01:27:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:47.049 01:27:38 -- common/autotest_common.sh@1198 -- # local i=0 00:19:47.049 01:27:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:47.049 01:27:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:19:47.049 01:27:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:47.049 01:27:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:19:47.049 01:27:38 -- common/autotest_common.sh@1210 -- # return 0 00:19:47.049 01:27:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:47.049 01:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.049 01:27:38 -- common/autotest_common.sh@10 -- # set +x 00:19:47.049 01:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.049 01:27:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.049 01:27:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:47.309 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:47.309 01:27:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:47.309 01:27:38 -- common/autotest_common.sh@1198 -- # local i=0 00:19:47.309 01:27:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:47.309 01:27:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:19:47.309 01:27:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:47.309 01:27:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:19:47.309 01:27:38 -- common/autotest_common.sh@1210 -- # return 0 00:19:47.309 01:27:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:47.309 01:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.309 01:27:38 -- common/autotest_common.sh@10 -- # set +x 00:19:47.309 01:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.309 01:27:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.309 01:27:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:47.568 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:47.568 01:27:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:47.568 01:27:39 -- common/autotest_common.sh@1198 -- # local i=0 00:19:47.568 01:27:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:47.568 01:27:39 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:19:47.568 01:27:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:47.568 01:27:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:19:47.568 01:27:39 -- common/autotest_common.sh@1210 -- # return 0 00:19:47.568 01:27:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:47.568 01:27:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.568 01:27:39 -- common/autotest_common.sh@10 -- # set +x 00:19:47.827 01:27:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.827 01:27:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.827 01:27:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:47.827 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:47.827 01:27:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:47.827 01:27:39 -- common/autotest_common.sh@1198 -- # local i=0 00:19:47.827 01:27:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:47.827 01:27:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:19:47.827 01:27:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:47.827 01:27:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:19:47.827 01:27:39 -- common/autotest_common.sh@1210 -- # return 0 00:19:47.827 01:27:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:47.827 01:27:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.827 01:27:39 -- common/autotest_common.sh@10 -- # set +x 00:19:47.827 01:27:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.827 01:27:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:47.827 01:27:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:48.085 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:48.085 01:27:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:48.085 01:27:39 -- common/autotest_common.sh@1198 -- # local i=0 00:19:48.085 01:27:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:48.085 01:27:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:19:48.085 01:27:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:48.085 01:27:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:19:48.085 01:27:39 -- common/autotest_common.sh@1210 -- # return 0 00:19:48.085 01:27:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:48.085 01:27:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.085 01:27:39 -- common/autotest_common.sh@10 -- # set +x 00:19:48.085 01:27:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.085 01:27:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:48.085 01:27:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:48.085 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:48.085 01:27:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:48.085 01:27:39 -- common/autotest_common.sh@1198 -- # local i=0 00:19:48.085 01:27:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:48.085 01:27:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:19:48.085 01:27:39 -- 
common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:48.085 01:27:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:19:48.085 01:27:39 -- common/autotest_common.sh@1210 -- # return 0 00:19:48.085 01:27:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:48.085 01:27:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.085 01:27:39 -- common/autotest_common.sh@10 -- # set +x 00:19:48.085 01:27:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.085 01:27:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:48.085 01:27:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:48.344 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:48.344 01:27:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:48.344 01:27:39 -- common/autotest_common.sh@1198 -- # local i=0 00:19:48.344 01:27:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:48.344 01:27:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:19:48.344 01:27:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:48.344 01:27:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:19:48.344 01:27:39 -- common/autotest_common.sh@1210 -- # return 0 00:19:48.344 01:27:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:48.344 01:27:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.344 01:27:39 -- common/autotest_common.sh@10 -- # set +x 00:19:48.344 01:27:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.344 01:27:39 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:48.344 01:27:39 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:48.344 01:27:39 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:48.344 01:27:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:48.344 01:27:39 -- nvmf/common.sh@116 -- # sync 00:19:48.344 01:27:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:48.344 01:27:39 -- nvmf/common.sh@119 -- # set +e 00:19:48.344 01:27:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:48.344 01:27:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:48.344 rmmod nvme_tcp 00:19:48.344 rmmod nvme_fabrics 00:19:48.344 rmmod nvme_keyring 00:19:48.344 01:27:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:48.344 01:27:39 -- nvmf/common.sh@123 -- # set -e 00:19:48.344 01:27:39 -- nvmf/common.sh@124 -- # return 0 00:19:48.344 01:27:39 -- nvmf/common.sh@477 -- # '[' -n 667970 ']' 00:19:48.344 01:27:39 -- nvmf/common.sh@478 -- # killprocess 667970 00:19:48.344 01:27:39 -- common/autotest_common.sh@926 -- # '[' -z 667970 ']' 00:19:48.344 01:27:39 -- common/autotest_common.sh@930 -- # kill -0 667970 00:19:48.344 01:27:39 -- common/autotest_common.sh@931 -- # uname 00:19:48.344 01:27:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:48.344 01:27:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 667970 00:19:48.344 01:27:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:48.344 01:27:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:48.344 01:27:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 667970' 00:19:48.344 killing process with pid 667970 00:19:48.344 01:27:39 -- common/autotest_common.sh@945 -- # kill 667970 00:19:48.344 01:27:39 -- 
common/autotest_common.sh@950 -- # wait 667970 00:19:48.910 01:27:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:48.910 01:27:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:48.910 01:27:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:48.910 01:27:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:48.910 01:27:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:48.910 01:27:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.910 01:27:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.910 01:27:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.449 01:27:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:51.449 00:19:51.449 real 1m1.525s 00:19:51.449 user 3m15.757s 00:19:51.449 sys 0m25.355s 00:19:51.449 01:27:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.449 01:27:42 -- common/autotest_common.sh@10 -- # set +x 00:19:51.449 ************************************ 00:19:51.449 END TEST nvmf_multiconnection 00:19:51.449 ************************************ 00:19:51.449 01:27:42 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:51.449 01:27:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:51.449 01:27:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:51.449 01:27:42 -- common/autotest_common.sh@10 -- # set +x 00:19:51.449 ************************************ 00:19:51.449 START TEST nvmf_initiator_timeout 00:19:51.449 ************************************ 00:19:51.449 01:27:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:51.449 * Looking for test storage... 
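The teardown traced above repeats one pattern per subsystem: disconnect the initiator, poll lsblk until the SPDKn serial disappears, then delete the subsystem over RPC. A minimal sketch of that loop, reconstructed only from the commands visible in the trace (the real waitforserial_disconnect and rpc_cmd helpers in autotest_common.sh handle arguments, retries and timeouts differently, and the rpc.py path is an assumption based on the scripts directory used elsewhere in this run):

# Sketch only: per-subsystem teardown as seen in the target/multiconnection.sh traces.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
NVMF_SUBSYS=11

waitforserial_disconnect() {
    # Poll until no block device reports this serial; the retry limit is illustrative.
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1
        sleep 1
    done
    return 0
}

for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    waitforserial_disconnect "SPDK${i}"
    "$rpc_py" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done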
00:19:51.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:51.449 01:27:42 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:51.449 01:27:42 -- nvmf/common.sh@7 -- # uname -s 00:19:51.449 01:27:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.449 01:27:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.449 01:27:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.449 01:27:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.449 01:27:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.449 01:27:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.449 01:27:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.449 01:27:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.449 01:27:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.449 01:27:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.449 01:27:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.449 01:27:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.449 01:27:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.449 01:27:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.449 01:27:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:51.449 01:27:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:51.449 01:27:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.449 01:27:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.449 01:27:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.449 01:27:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.449 01:27:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.449 01:27:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.449 01:27:42 -- paths/export.sh@5 -- # export PATH 00:19:51.449 01:27:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.449 01:27:42 -- nvmf/common.sh@46 -- # : 0 00:19:51.449 01:27:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:51.449 01:27:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:51.449 01:27:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:51.449 01:27:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.449 01:27:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.450 01:27:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:51.450 01:27:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:51.450 01:27:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:51.450 01:27:42 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:51.450 01:27:42 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:51.450 01:27:42 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:51.450 01:27:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:51.450 01:27:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.450 01:27:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:51.450 01:27:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:51.450 01:27:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:51.450 01:27:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.450 01:27:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.450 01:27:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.450 01:27:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:51.450 01:27:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:51.450 01:27:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:51.450 01:27:42 -- common/autotest_common.sh@10 -- # set +x 00:19:53.353 01:27:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:53.353 01:27:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:53.353 01:27:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:53.353 01:27:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:53.353 01:27:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:53.353 01:27:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:53.353 01:27:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:53.353 01:27:44 -- nvmf/common.sh@294 -- # net_devs=() 00:19:53.353 01:27:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:53.353 
01:27:44 -- nvmf/common.sh@295 -- # e810=() 00:19:53.353 01:27:44 -- nvmf/common.sh@295 -- # local -ga e810 00:19:53.353 01:27:44 -- nvmf/common.sh@296 -- # x722=() 00:19:53.353 01:27:44 -- nvmf/common.sh@296 -- # local -ga x722 00:19:53.353 01:27:44 -- nvmf/common.sh@297 -- # mlx=() 00:19:53.353 01:27:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:53.353 01:27:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.353 01:27:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.353 01:27:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.353 01:27:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.353 01:27:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.353 01:27:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.353 01:27:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.353 01:27:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.353 01:27:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.353 01:27:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.353 01:27:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.353 01:27:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:53.353 01:27:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:53.353 01:27:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:53.353 01:27:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:53.353 01:27:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:53.353 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:53.353 01:27:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:53.353 01:27:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:53.353 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:53.353 01:27:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:53.353 01:27:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:53.353 01:27:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:53.353 01:27:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.353 01:27:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:53.353 01:27:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.354 01:27:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:19:53.354 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:53.354 01:27:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.354 01:27:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:53.354 01:27:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.354 01:27:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:53.354 01:27:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.354 01:27:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:53.354 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:53.354 01:27:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.354 01:27:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:53.354 01:27:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:53.354 01:27:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:53.354 01:27:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:53.354 01:27:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:53.354 01:27:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.354 01:27:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.354 01:27:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:53.354 01:27:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:53.354 01:27:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:53.354 01:27:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:53.354 01:27:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:53.354 01:27:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:53.354 01:27:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.354 01:27:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:53.354 01:27:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:53.354 01:27:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:53.354 01:27:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:53.354 01:27:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:53.354 01:27:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:53.354 01:27:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:53.354 01:27:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:53.354 01:27:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:53.354 01:27:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:53.354 01:27:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:53.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:53.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:19:53.354 00:19:53.354 --- 10.0.0.2 ping statistics --- 00:19:53.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.354 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:19:53.354 01:27:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:53.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:53.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:19:53.354 00:19:53.354 --- 10.0.0.1 ping statistics --- 00:19:53.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.354 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:19:53.354 01:27:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.354 01:27:44 -- nvmf/common.sh@410 -- # return 0 00:19:53.354 01:27:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:53.354 01:27:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.354 01:27:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:53.354 01:27:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:53.354 01:27:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.354 01:27:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:53.354 01:27:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:53.354 01:27:44 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:53.354 01:27:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:53.354 01:27:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:53.354 01:27:44 -- common/autotest_common.sh@10 -- # set +x 00:19:53.354 01:27:44 -- nvmf/common.sh@469 -- # nvmfpid=677449 00:19:53.354 01:27:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:53.354 01:27:44 -- nvmf/common.sh@470 -- # waitforlisten 677449 00:19:53.354 01:27:44 -- common/autotest_common.sh@819 -- # '[' -z 677449 ']' 00:19:53.354 01:27:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.354 01:27:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:53.354 01:27:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.354 01:27:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:53.354 01:27:44 -- common/autotest_common.sh@10 -- # set +x 00:19:53.354 [2024-07-27 01:27:44.852753] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:53.354 [2024-07-27 01:27:44.852835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.354 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.354 [2024-07-27 01:27:44.917842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.354 [2024-07-27 01:27:45.029229] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:53.354 [2024-07-27 01:27:45.029389] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.354 [2024-07-27 01:27:45.029407] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.354 [2024-07-27 01:27:45.029420] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
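At this point nvmftestinit has split the two E810 ports across a network namespace: cvl_0_0 (the target side, 10.0.0.2/24) lives inside cvl_0_0_ns_spdk, cvl_0_1 (the initiator side, 10.0.0.1/24) stays in the root namespace, and nvmf_tgt is launched inside that namespace. A few read-only commands that could confirm this topology on the test node (the verification commands are an assumption and not part of the harness; the interface names and addresses are taken from the log above):

  ip netns list                                           # expect cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip -4 addr show cvl_0_0   # target side, 10.0.0.2/24
  ip -4 addr show cvl_0_1                                 # initiator side, 10.0.0.1/24
  ip netns exec cvl_0_0_ns_spdk ss -ltn                   # port 4420 shows up once the listener is added
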
00:19:53.354 [2024-07-27 01:27:45.029488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.354 [2024-07-27 01:27:45.029558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.354 [2024-07-27 01:27:45.029605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.354 [2024-07-27 01:27:45.029608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.329 01:27:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:54.329 01:27:45 -- common/autotest_common.sh@852 -- # return 0 00:19:54.329 01:27:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:54.329 01:27:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:54.329 01:27:45 -- common/autotest_common.sh@10 -- # set +x 00:19:54.329 01:27:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.329 01:27:45 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:54.329 01:27:45 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:54.329 01:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.329 01:27:45 -- common/autotest_common.sh@10 -- # set +x 00:19:54.329 Malloc0 00:19:54.329 01:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.329 01:27:45 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:54.329 01:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.329 01:27:45 -- common/autotest_common.sh@10 -- # set +x 00:19:54.329 Delay0 00:19:54.329 01:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.329 01:27:45 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:54.329 01:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.329 01:27:45 -- common/autotest_common.sh@10 -- # set +x 00:19:54.329 [2024-07-27 01:27:45.914128] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.329 01:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.329 01:27:45 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:54.329 01:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.329 01:27:45 -- common/autotest_common.sh@10 -- # set +x 00:19:54.329 01:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.329 01:27:45 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:54.329 01:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.329 01:27:45 -- common/autotest_common.sh@10 -- # set +x 00:19:54.329 01:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.329 01:27:45 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:54.329 01:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.329 01:27:45 -- common/autotest_common.sh@10 -- # set +x 00:19:54.329 [2024-07-27 01:27:45.942426] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.330 01:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.330 01:27:45 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:54.896 01:27:46 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:54.896 01:27:46 -- common/autotest_common.sh@1177 -- # local i=0 00:19:54.896 01:27:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:54.896 01:27:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:54.896 01:27:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:57.435 01:27:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:57.435 01:27:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:57.435 01:27:48 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:57.435 01:27:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:57.435 01:27:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:57.435 01:27:48 -- common/autotest_common.sh@1187 -- # return 0 00:19:57.435 01:27:48 -- target/initiator_timeout.sh@35 -- # fio_pid=678024 00:19:57.435 01:27:48 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:57.435 01:27:48 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:57.435 [global] 00:19:57.435 thread=1 00:19:57.435 invalidate=1 00:19:57.435 rw=write 00:19:57.435 time_based=1 00:19:57.435 runtime=60 00:19:57.435 ioengine=libaio 00:19:57.435 direct=1 00:19:57.435 bs=4096 00:19:57.435 iodepth=1 00:19:57.435 norandommap=0 00:19:57.435 numjobs=1 00:19:57.435 00:19:57.435 verify_dump=1 00:19:57.435 verify_backlog=512 00:19:57.435 verify_state_save=0 00:19:57.435 do_verify=1 00:19:57.435 verify=crc32c-intel 00:19:57.435 [job0] 00:19:57.435 filename=/dev/nvme0n1 00:19:57.435 Could not set queue depth (nvme0n1) 00:19:57.435 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:57.435 fio-3.35 00:19:57.435 Starting 1 thread 00:19:59.966 01:27:51 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:59.966 01:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.966 01:27:51 -- common/autotest_common.sh@10 -- # set +x 00:19:59.966 true 00:19:59.966 01:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:59.966 01:27:51 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:59.966 01:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.966 01:27:51 -- common/autotest_common.sh@10 -- # set +x 00:19:59.966 true 00:19:59.966 01:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:59.966 01:27:51 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:59.966 01:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.966 01:27:51 -- common/autotest_common.sh@10 -- # set +x 00:19:59.966 true 00:19:59.966 01:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:59.966 01:27:51 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:59.966 01:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.966 01:27:51 -- common/autotest_common.sh@10 -- # set +x 00:19:59.966 true 00:19:59.966 01:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
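The four bdev_delay_update_latency calls above raise the Delay0 read/write latencies far beyond the initiator's I/O timeout while the fio job is in flight; a few seconds later the suite drops them back to 30 (the restore calls appear below). Since rpc_cmd wraps scripts/rpc.py, the same bump could be issued by hand roughly as follows (the rpc.py path and the interpretation of the values as microseconds are assumptions based on SPDK defaults, not shown in this log; the numeric values are copied from the trace as-is):

  RPC=./scripts/rpc.py        # assumed path inside the SPDK checkout
  $RPC bdev_delay_update_latency Delay0 avg_read  31000000    # ~31 s if microseconds
  $RPC bdev_delay_update_latency Delay0 avg_write 31000000
  $RPC bdev_delay_update_latency Delay0 p99_read  31000000
  $RPC bdev_delay_update_latency Delay0 p99_write 310000000   # value taken from the log as-is
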
00:19:59.966 01:27:51 -- target/initiator_timeout.sh@45 -- # sleep 3 00:20:03.256 01:27:54 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:20:03.256 01:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.256 01:27:54 -- common/autotest_common.sh@10 -- # set +x 00:20:03.256 true 00:20:03.256 01:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.256 01:27:54 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:20:03.256 01:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.256 01:27:54 -- common/autotest_common.sh@10 -- # set +x 00:20:03.256 true 00:20:03.256 01:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.256 01:27:54 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:20:03.256 01:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.256 01:27:54 -- common/autotest_common.sh@10 -- # set +x 00:20:03.256 true 00:20:03.256 01:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.256 01:27:54 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:20:03.256 01:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:03.256 01:27:54 -- common/autotest_common.sh@10 -- # set +x 00:20:03.256 true 00:20:03.256 01:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:03.256 01:27:54 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:20:03.256 01:27:54 -- target/initiator_timeout.sh@54 -- # wait 678024 00:20:59.494 00:20:59.494 job0: (groupid=0, jobs=1): err= 0: pid=678093: Sat Jul 27 01:28:49 2024 00:20:59.494 read: IOPS=42, BW=169KiB/s (174kB/s)(9.93MiB/60004msec) 00:20:59.494 slat (usec): min=5, max=7830, avg=27.08, stdev=155.22 00:20:59.494 clat (usec): min=356, max=41083k, avg=23239.92, stdev=814847.33 00:20:59.494 lat (usec): min=361, max=41083k, avg=23266.99, stdev=814847.30 00:20:59.494 clat percentiles (usec): 00:20:59.494 | 1.00th=[ 392], 5.00th=[ 412], 10.00th=[ 429], 00:20:59.494 | 20.00th=[ 465], 30.00th=[ 490], 40.00th=[ 529], 00:20:59.494 | 50.00th=[ 545], 60.00th=[ 578], 70.00th=[ 594], 00:20:59.494 | 80.00th=[ 627], 90.00th=[ 41157], 95.00th=[ 42206], 00:20:59.494 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:20:59.494 | 99.95th=[ 42206], 99.99th=[17112761] 00:20:59.494 write: IOPS=42, BW=171KiB/s (175kB/s)(10.0MiB/60004msec); 0 zone resets 00:20:59.494 slat (nsec): min=5608, max=74775, avg=21766.30, stdev=11957.80 00:20:59.494 clat (usec): min=221, max=518, avg=300.78, stdev=50.88 00:20:59.494 lat (usec): min=227, max=545, avg=322.55, stdev=54.38 00:20:59.494 clat percentiles (usec): 00:20:59.494 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 255], 00:20:59.494 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 310], 00:20:59.494 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 383], 95.00th=[ 400], 00:20:59.494 | 99.00th=[ 445], 99.50th=[ 457], 99.90th=[ 474], 99.95th=[ 490], 00:20:59.494 | 99.99th=[ 519] 00:20:59.494 bw ( KiB/s): min= 2904, max= 5288, per=100.00%, avg=4096.00, stdev=842.87, samples=5 00:20:59.494 iops : min= 726, max= 1322, avg=1024.00, stdev=210.72, samples=5 00:20:59.494 lat (usec) : 250=7.41%, 500=58.76%, 750=25.77%, 1000=0.04% 00:20:59.494 lat (msec) : 2=0.04%, 50=7.96%, >=2000=0.02% 00:20:59.494 cpu : usr=0.15%, sys=0.18%, ctx=5103, majf=0, minf=2 00:20:59.494 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.494 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.494 issued rwts: total=2542,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.494 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:59.494 00:20:59.494 Run status group 0 (all jobs): 00:20:59.494 READ: bw=169KiB/s (174kB/s), 169KiB/s-169KiB/s (174kB/s-174kB/s), io=9.93MiB (10.4MB), run=60004-60004msec 00:20:59.494 WRITE: bw=171KiB/s (175kB/s), 171KiB/s-171KiB/s (175kB/s-175kB/s), io=10.0MiB (10.5MB), run=60004-60004msec 00:20:59.494 00:20:59.494 Disk stats (read/write): 00:20:59.494 nvme0n1: ios=2638/2560, merge=0/0, ticks=17973/737, in_queue=18710, util=99.77% 00:20:59.494 01:28:49 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:59.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:59.494 01:28:49 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:59.494 01:28:49 -- common/autotest_common.sh@1198 -- # local i=0 00:20:59.494 01:28:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:59.494 01:28:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:59.494 01:28:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:59.494 01:28:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:59.494 01:28:49 -- common/autotest_common.sh@1210 -- # return 0 00:20:59.494 01:28:49 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:59.494 01:28:49 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:59.494 nvmf hotplug test: fio successful as expected 00:20:59.494 01:28:49 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.494 01:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:59.494 01:28:49 -- common/autotest_common.sh@10 -- # set +x 00:20:59.494 01:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:59.494 01:28:49 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:59.494 01:28:49 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:59.494 01:28:49 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:59.494 01:28:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:59.494 01:28:49 -- nvmf/common.sh@116 -- # sync 00:20:59.494 01:28:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:59.494 01:28:49 -- nvmf/common.sh@119 -- # set +e 00:20:59.494 01:28:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:59.494 01:28:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:59.494 rmmod nvme_tcp 00:20:59.494 rmmod nvme_fabrics 00:20:59.494 rmmod nvme_keyring 00:20:59.494 01:28:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:59.494 01:28:49 -- nvmf/common.sh@123 -- # set -e 00:20:59.494 01:28:49 -- nvmf/common.sh@124 -- # return 0 00:20:59.494 01:28:49 -- nvmf/common.sh@477 -- # '[' -n 677449 ']' 00:20:59.494 01:28:49 -- nvmf/common.sh@478 -- # killprocess 677449 00:20:59.494 01:28:49 -- common/autotest_common.sh@926 -- # '[' -z 677449 ']' 00:20:59.494 01:28:49 -- common/autotest_common.sh@930 -- # kill -0 677449 00:20:59.494 01:28:49 -- common/autotest_common.sh@931 -- # uname 00:20:59.494 01:28:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:59.494 01:28:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
677449 00:20:59.494 01:28:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:59.494 01:28:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:59.494 01:28:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 677449' 00:20:59.494 killing process with pid 677449 00:20:59.494 01:28:49 -- common/autotest_common.sh@945 -- # kill 677449 00:20:59.494 01:28:49 -- common/autotest_common.sh@950 -- # wait 677449 00:20:59.494 01:28:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:59.494 01:28:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:59.494 01:28:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:59.494 01:28:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:59.494 01:28:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:59.494 01:28:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.494 01:28:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.494 01:28:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.065 01:28:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:00.065 00:21:00.065 real 1m8.917s 00:21:00.065 user 4m14.245s 00:21:00.065 sys 0m6.328s 00:21:00.065 01:28:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.065 01:28:51 -- common/autotest_common.sh@10 -- # set +x 00:21:00.065 ************************************ 00:21:00.065 END TEST nvmf_initiator_timeout 00:21:00.065 ************************************ 00:21:00.065 01:28:51 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:21:00.065 01:28:51 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:21:00.065 01:28:51 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:21:00.065 01:28:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:00.065 01:28:51 -- common/autotest_common.sh@10 -- # set +x 00:21:01.970 01:28:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:01.970 01:28:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:01.970 01:28:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:01.970 01:28:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:01.970 01:28:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:01.970 01:28:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:01.970 01:28:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:01.970 01:28:53 -- nvmf/common.sh@294 -- # net_devs=() 00:21:01.970 01:28:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:01.970 01:28:53 -- nvmf/common.sh@295 -- # e810=() 00:21:01.970 01:28:53 -- nvmf/common.sh@295 -- # local -ga e810 00:21:01.970 01:28:53 -- nvmf/common.sh@296 -- # x722=() 00:21:01.970 01:28:53 -- nvmf/common.sh@296 -- # local -ga x722 00:21:01.970 01:28:53 -- nvmf/common.sh@297 -- # mlx=() 00:21:01.970 01:28:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:01.970 01:28:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.970 01:28:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.970 01:28:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.970 01:28:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.970 01:28:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.970 01:28:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.970 01:28:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.970 01:28:53 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.970 01:28:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.970 01:28:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.971 01:28:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.971 01:28:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:01.971 01:28:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:01.971 01:28:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:01.971 01:28:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:01.971 01:28:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:01.971 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:01.971 01:28:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:01.971 01:28:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:01.971 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:01.971 01:28:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:01.971 01:28:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:01.971 01:28:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.971 01:28:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:01.971 01:28:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.971 01:28:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:01.971 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:01.971 01:28:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.971 01:28:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:01.971 01:28:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.971 01:28:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:01.971 01:28:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.971 01:28:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:01.971 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:01.971 01:28:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.971 01:28:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:01.971 01:28:53 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.971 01:28:53 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:21:01.971 01:28:53 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:01.971 01:28:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:01.971 01:28:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:01.971 01:28:53 -- common/autotest_common.sh@10 -- # set +x 00:21:01.971 ************************************ 00:21:01.971 START TEST nvmf_perf_adq 00:21:01.971 ************************************ 00:21:01.971 01:28:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:01.971 * Looking for test storage... 00:21:01.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:01.971 01:28:53 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:01.971 01:28:53 -- nvmf/common.sh@7 -- # uname -s 00:21:01.971 01:28:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.971 01:28:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.971 01:28:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.971 01:28:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.971 01:28:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.971 01:28:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.971 01:28:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.971 01:28:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.971 01:28:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.971 01:28:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.971 01:28:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.971 01:28:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.971 01:28:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.971 01:28:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.971 01:28:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:01.971 01:28:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:01.971 01:28:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.971 01:28:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.971 01:28:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.971 01:28:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.971 01:28:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.971 01:28:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.971 01:28:53 -- paths/export.sh@5 -- # export PATH 00:21:01.971 01:28:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.971 01:28:53 -- nvmf/common.sh@46 -- # : 0 00:21:01.971 01:28:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:01.971 01:28:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:01.971 01:28:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:01.971 01:28:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.971 01:28:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.971 01:28:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:01.971 01:28:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:01.971 01:28:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:01.971 01:28:53 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:01.971 01:28:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:01.971 01:28:53 -- common/autotest_common.sh@10 -- # set +x 00:21:03.942 01:28:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:03.942 01:28:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:03.942 01:28:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:03.942 01:28:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:03.942 01:28:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:03.942 01:28:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:03.942 01:28:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:03.942 01:28:55 -- nvmf/common.sh@294 -- # net_devs=() 00:21:03.942 01:28:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:03.942 01:28:55 -- nvmf/common.sh@295 -- # e810=() 00:21:03.942 01:28:55 -- nvmf/common.sh@295 -- # local -ga e810 00:21:03.942 01:28:55 -- nvmf/common.sh@296 -- # x722=() 00:21:03.942 01:28:55 -- nvmf/common.sh@296 -- # local -ga x722 00:21:03.942 01:28:55 -- nvmf/common.sh@297 -- # mlx=() 00:21:03.942 01:28:55 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:21:03.942 01:28:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.942 01:28:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.942 01:28:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.942 01:28:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.942 01:28:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.942 01:28:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.942 01:28:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.942 01:28:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.942 01:28:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.942 01:28:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.942 01:28:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.942 01:28:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:03.942 01:28:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:03.942 01:28:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:03.942 01:28:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:03.942 01:28:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:03.942 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:03.942 01:28:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:03.942 01:28:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:03.942 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:03.942 01:28:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:03.942 01:28:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:03.942 01:28:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:03.942 01:28:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.942 01:28:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:03.942 01:28:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.942 01:28:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:03.942 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:03.942 01:28:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.942 01:28:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:03.942 01:28:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:03.942 01:28:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:03.942 01:28:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.942 01:28:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:03.942 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:03.942 01:28:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.942 01:28:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:03.942 01:28:55 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.942 01:28:55 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:03.942 01:28:55 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:03.942 01:28:55 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:21:03.942 01:28:55 -- target/perf_adq.sh@52 -- # rmmod ice 00:21:04.509 01:28:56 -- target/perf_adq.sh@53 -- # modprobe ice 00:21:06.415 01:28:58 -- target/perf_adq.sh@54 -- # sleep 5 00:21:11.696 01:29:03 -- target/perf_adq.sh@67 -- # nvmftestinit 00:21:11.696 01:29:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:11.696 01:29:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.696 01:29:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:11.696 01:29:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:11.696 01:29:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:11.696 01:29:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.696 01:29:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.696 01:29:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.696 01:29:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:11.696 01:29:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:11.696 01:29:03 -- common/autotest_common.sh@10 -- # set +x 00:21:11.696 01:29:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:11.696 01:29:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:11.696 01:29:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:11.696 01:29:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:11.696 01:29:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:11.696 01:29:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:11.696 01:29:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:11.696 01:29:03 -- nvmf/common.sh@294 -- # net_devs=() 00:21:11.696 01:29:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:11.696 01:29:03 -- nvmf/common.sh@295 -- # e810=() 00:21:11.696 01:29:03 -- nvmf/common.sh@295 -- # local -ga e810 00:21:11.696 01:29:03 -- nvmf/common.sh@296 -- # x722=() 00:21:11.696 01:29:03 -- nvmf/common.sh@296 -- # local -ga x722 00:21:11.696 01:29:03 -- nvmf/common.sh@297 -- # mlx=() 00:21:11.696 01:29:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:11.696 01:29:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.696 01:29:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.696 01:29:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.696 01:29:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.696 01:29:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.696 01:29:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.696 01:29:03 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.696 01:29:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.696 01:29:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.696 01:29:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.696 01:29:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.696 01:29:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:11.696 01:29:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:11.696 01:29:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:11.696 01:29:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:11.696 01:29:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:11.696 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:11.696 01:29:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:11.696 01:29:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:11.696 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:11.696 01:29:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:11.696 01:29:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:11.696 01:29:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:11.696 01:29:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.696 01:29:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:11.697 01:29:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.697 01:29:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:11.697 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:11.697 01:29:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.697 01:29:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:11.697 01:29:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.697 01:29:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:11.697 01:29:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.697 01:29:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:11.697 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:11.697 01:29:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.697 01:29:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:11.697 01:29:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:11.697 01:29:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:11.697 01:29:03 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:11.697 01:29:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:11.697 01:29:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.697 01:29:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.697 01:29:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.697 01:29:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:11.697 01:29:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.697 01:29:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.697 01:29:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:11.697 01:29:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.697 01:29:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.697 01:29:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:11.697 01:29:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:11.697 01:29:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.697 01:29:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.697 01:29:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.697 01:29:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.697 01:29:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:11.697 01:29:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.697 01:29:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.697 01:29:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.697 01:29:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:11.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:21:11.697 00:21:11.697 --- 10.0.0.2 ping statistics --- 00:21:11.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.697 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:21:11.697 01:29:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:11.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:21:11.697 00:21:11.697 --- 10.0.0.1 ping statistics --- 00:21:11.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.697 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:21:11.697 01:29:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.697 01:29:03 -- nvmf/common.sh@410 -- # return 0 00:21:11.697 01:29:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:11.697 01:29:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.697 01:29:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:11.697 01:29:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:11.697 01:29:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.697 01:29:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:11.697 01:29:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:11.697 01:29:03 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:11.697 01:29:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:11.697 01:29:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:11.697 01:29:03 -- common/autotest_common.sh@10 -- # set +x 00:21:11.697 01:29:03 -- nvmf/common.sh@469 -- # nvmfpid=689892 00:21:11.697 01:29:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:11.697 01:29:03 -- nvmf/common.sh@470 -- # waitforlisten 689892 00:21:11.697 01:29:03 -- common/autotest_common.sh@819 -- # '[' -z 689892 ']' 00:21:11.697 01:29:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.697 01:29:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:11.697 01:29:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.697 01:29:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:11.697 01:29:03 -- common/autotest_common.sh@10 -- # set +x 00:21:11.697 [2024-07-27 01:29:03.244200] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:11.697 [2024-07-27 01:29:03.244286] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.697 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.697 [2024-07-27 01:29:03.314591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:11.697 [2024-07-27 01:29:03.429479] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:11.697 [2024-07-27 01:29:03.429655] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.697 [2024-07-27 01:29:03.429674] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.697 [2024-07-27 01:29:03.429688] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
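Unlike the earlier suites, the ADQ target is started with --wait-for-rpc, so the application pauses before subsystem initialization and the configuration that follows can change socket-implementation options first. The adq_configure_nvmf_target steps traced below boil down to this RPC ordering, sketched here with scripts/rpc.py (the script path is an assumption; the arguments are taken from the log):

  RPC=./scripts/rpc.py        # assumed path; rpc_cmd in the log wraps the same script
  $RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  $RPC framework_start_init   # finish startup only after the sock options are in place
  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
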
00:21:11.697 [2024-07-27 01:29:03.429787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.697 [2024-07-27 01:29:03.429857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.697 [2024-07-27 01:29:03.429900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.697 [2024-07-27 01:29:03.429903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.631 01:29:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:12.631 01:29:04 -- common/autotest_common.sh@852 -- # return 0 00:21:12.631 01:29:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:12.631 01:29:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:12.631 01:29:04 -- common/autotest_common.sh@10 -- # set +x 00:21:12.631 01:29:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.631 01:29:04 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:21:12.631 01:29:04 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:12.631 01:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.631 01:29:04 -- common/autotest_common.sh@10 -- # set +x 00:21:12.631 01:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.631 01:29:04 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:21:12.631 01:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.631 01:29:04 -- common/autotest_common.sh@10 -- # set +x 00:21:12.631 01:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.631 01:29:04 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:12.631 01:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.631 01:29:04 -- common/autotest_common.sh@10 -- # set +x 00:21:12.631 [2024-07-27 01:29:04.324138] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.631 01:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.631 01:29:04 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:12.631 01:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.631 01:29:04 -- common/autotest_common.sh@10 -- # set +x 00:21:12.631 Malloc1 00:21:12.631 01:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.631 01:29:04 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:12.631 01:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.631 01:29:04 -- common/autotest_common.sh@10 -- # set +x 00:21:12.631 01:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.631 01:29:04 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:12.631 01:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.631 01:29:04 -- common/autotest_common.sh@10 -- # set +x 00:21:12.631 01:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.631 01:29:04 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.631 01:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.631 01:29:04 -- common/autotest_common.sh@10 -- # set +x 00:21:12.631 [2024-07-27 01:29:04.377405] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.631 01:29:04 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.631 01:29:04 -- target/perf_adq.sh@73 -- # perfpid=690049 00:21:12.631 01:29:04 -- target/perf_adq.sh@74 -- # sleep 2 00:21:12.631 01:29:04 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:12.888 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.794 01:29:06 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:21:14.794 01:29:06 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:14.794 01:29:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:14.794 01:29:06 -- common/autotest_common.sh@10 -- # set +x 00:21:14.794 01:29:06 -- target/perf_adq.sh@76 -- # wc -l 00:21:14.794 01:29:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:14.794 01:29:06 -- target/perf_adq.sh@76 -- # count=4 00:21:14.794 01:29:06 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:21:14.794 01:29:06 -- target/perf_adq.sh@81 -- # wait 690049 00:21:22.911 Initializing NVMe Controllers 00:21:22.911 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:22.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:22.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:22.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:22.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:22.911 Initialization complete. Launching workers. 00:21:22.911 ======================================================== 00:21:22.911 Latency(us) 00:21:22.911 Device Information : IOPS MiB/s Average min max 00:21:22.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11270.51 44.03 5678.19 1073.00 9097.04 00:21:22.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11358.01 44.37 5635.22 1031.61 8305.02 00:21:22.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11062.21 43.21 5785.17 940.09 9503.77 00:21:22.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11005.02 42.99 5815.89 1025.62 9238.60 00:21:22.911 ======================================================== 00:21:22.911 Total : 44695.76 174.59 5727.65 940.09 9503.77 00:21:22.911 00:21:22.911 01:29:14 -- target/perf_adq.sh@82 -- # nvmftestfini 00:21:22.911 01:29:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:22.911 01:29:14 -- nvmf/common.sh@116 -- # sync 00:21:22.911 01:29:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:22.911 01:29:14 -- nvmf/common.sh@119 -- # set +e 00:21:22.911 01:29:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:22.911 01:29:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:22.911 rmmod nvme_tcp 00:21:22.911 rmmod nvme_fabrics 00:21:22.911 rmmod nvme_keyring 00:21:22.911 01:29:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:22.911 01:29:14 -- nvmf/common.sh@123 -- # set -e 00:21:22.911 01:29:14 -- nvmf/common.sh@124 -- # return 0 00:21:22.911 01:29:14 -- nvmf/common.sh@477 -- # '[' -n 689892 ']' 00:21:22.911 01:29:14 -- nvmf/common.sh@478 -- # killprocess 689892 00:21:22.911 01:29:14 -- common/autotest_common.sh@926 -- # '[' -z 689892 ']' 00:21:22.911 01:29:14 -- common/autotest_common.sh@930 -- # kill -0 
689892 00:21:22.911 01:29:14 -- common/autotest_common.sh@931 -- # uname 00:21:22.911 01:29:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:22.911 01:29:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 689892 00:21:22.911 01:29:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:22.911 01:29:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:22.912 01:29:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 689892' 00:21:22.912 killing process with pid 689892 00:21:22.912 01:29:14 -- common/autotest_common.sh@945 -- # kill 689892 00:21:22.912 01:29:14 -- common/autotest_common.sh@950 -- # wait 689892 00:21:23.169 01:29:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:23.169 01:29:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:23.169 01:29:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:23.169 01:29:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:23.169 01:29:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:23.169 01:29:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.169 01:29:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.169 01:29:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.706 01:29:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:25.706 01:29:16 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:21:25.706 01:29:16 -- target/perf_adq.sh@52 -- # rmmod ice 00:21:25.964 01:29:17 -- target/perf_adq.sh@53 -- # modprobe ice 00:21:27.871 01:29:19 -- target/perf_adq.sh@54 -- # sleep 5 00:21:33.156 01:29:24 -- target/perf_adq.sh@87 -- # nvmftestinit 00:21:33.156 01:29:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:33.156 01:29:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.156 01:29:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:33.156 01:29:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:33.156 01:29:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:33.156 01:29:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.156 01:29:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.156 01:29:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.156 01:29:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:33.156 01:29:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:33.156 01:29:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:33.156 01:29:24 -- common/autotest_common.sh@10 -- # set +x 00:21:33.157 01:29:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:33.157 01:29:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:33.157 01:29:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:33.157 01:29:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:33.157 01:29:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:33.157 01:29:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:33.157 01:29:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:33.157 01:29:24 -- nvmf/common.sh@294 -- # net_devs=() 00:21:33.157 01:29:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:33.157 01:29:24 -- nvmf/common.sh@295 -- # e810=() 00:21:33.157 01:29:24 -- nvmf/common.sh@295 -- # local -ga e810 00:21:33.157 01:29:24 -- nvmf/common.sh@296 -- # x722=() 00:21:33.157 01:29:24 -- nvmf/common.sh@296 -- # local -ga x722 00:21:33.157 01:29:24 -- nvmf/common.sh@297 -- # mlx=() 00:21:33.157 01:29:24 -- 
nvmf/common.sh@297 -- # local -ga mlx 00:21:33.157 01:29:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.157 01:29:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.157 01:29:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.157 01:29:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.157 01:29:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.157 01:29:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.157 01:29:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.157 01:29:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.157 01:29:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.157 01:29:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.157 01:29:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.157 01:29:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:33.157 01:29:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:33.157 01:29:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:33.157 01:29:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:33.157 01:29:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:33.157 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:33.157 01:29:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:33.157 01:29:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:33.157 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:33.157 01:29:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:33.157 01:29:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:33.157 01:29:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.157 01:29:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:33.157 01:29:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.157 01:29:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:33.157 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:33.157 01:29:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.157 01:29:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:33.157 01:29:24 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.157 01:29:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:33.157 01:29:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.157 01:29:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:33.157 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:33.157 01:29:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.157 01:29:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:33.157 01:29:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:33.157 01:29:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:33.157 01:29:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.157 01:29:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.157 01:29:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.157 01:29:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:33.157 01:29:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.157 01:29:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.157 01:29:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:33.157 01:29:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.157 01:29:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.157 01:29:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:33.157 01:29:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:33.157 01:29:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.157 01:29:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.157 01:29:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.157 01:29:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.157 01:29:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:33.157 01:29:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.157 01:29:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.157 01:29:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.157 01:29:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:33.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:21:33.157 00:21:33.157 --- 10.0.0.2 ping statistics --- 00:21:33.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.157 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:21:33.157 01:29:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:33.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:21:33.157 00:21:33.157 --- 10.0.0.1 ping statistics --- 00:21:33.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.157 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:21:33.157 01:29:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.157 01:29:24 -- nvmf/common.sh@410 -- # return 0 00:21:33.157 01:29:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:33.157 01:29:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.157 01:29:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:33.157 01:29:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.157 01:29:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:33.157 01:29:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:33.157 01:29:24 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:21:33.157 01:29:24 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:33.157 01:29:24 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:33.157 01:29:24 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:33.157 net.core.busy_poll = 1 00:21:33.157 01:29:24 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:33.157 net.core.busy_read = 1 00:21:33.157 01:29:24 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:33.157 01:29:24 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:33.157 01:29:24 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:33.157 01:29:24 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:33.157 01:29:24 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:33.157 01:29:24 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:33.157 01:29:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:33.157 01:29:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:33.157 01:29:24 -- common/autotest_common.sh@10 -- # set +x 00:21:33.157 01:29:24 -- nvmf/common.sh@469 -- # nvmfpid=692737 00:21:33.157 01:29:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:33.157 01:29:24 -- nvmf/common.sh@470 -- # waitforlisten 692737 00:21:33.157 01:29:24 -- common/autotest_common.sh@819 -- # '[' -z 692737 ']' 00:21:33.157 01:29:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.157 01:29:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:33.157 01:29:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
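Ahead of this second perf run, adq_configure_driver (traced just above) puts the target port into ADQ mode; the matching target-side change, a bit further below, is that sock_impl_set_options is called with --enable-placement-id 1 and the transport is created with --sock-priority 1 (the first run used 0 for both). Consolidated, the driver-side steps are roughly:

    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 \
        flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The flower filter pins NVMe/TCP traffic destined for 10.0.0.2:4420 into hardware traffic class 1, so the busy-polled queues of that class carry only the benchmark's connections.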
00:21:33.157 01:29:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:33.157 01:29:24 -- common/autotest_common.sh@10 -- # set +x 00:21:33.157 [2024-07-27 01:29:24.880241] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:33.157 [2024-07-27 01:29:24.880327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.416 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.416 [2024-07-27 01:29:24.946506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.416 [2024-07-27 01:29:25.054825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:33.416 [2024-07-27 01:29:25.054965] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.416 [2024-07-27 01:29:25.054981] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.416 [2024-07-27 01:29:25.054993] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:33.416 [2024-07-27 01:29:25.055231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.416 [2024-07-27 01:29:25.055256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.416 [2024-07-27 01:29:25.055315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.416 [2024-07-27 01:29:25.055317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.416 01:29:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:33.416 01:29:25 -- common/autotest_common.sh@852 -- # return 0 00:21:33.416 01:29:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:33.416 01:29:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:33.416 01:29:25 -- common/autotest_common.sh@10 -- # set +x 00:21:33.416 01:29:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.416 01:29:25 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:21:33.416 01:29:25 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:33.416 01:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.416 01:29:25 -- common/autotest_common.sh@10 -- # set +x 00:21:33.416 01:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.416 01:29:25 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:21:33.416 01:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.416 01:29:25 -- common/autotest_common.sh@10 -- # set +x 00:21:33.674 01:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.674 01:29:25 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:33.674 01:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.674 01:29:25 -- common/autotest_common.sh@10 -- # set +x 00:21:33.674 [2024-07-27 01:29:25.222659] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.674 01:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.674 01:29:25 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:33.674 01:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.674 01:29:25 -- 
common/autotest_common.sh@10 -- # set +x 00:21:33.674 Malloc1 00:21:33.674 01:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.674 01:29:25 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:33.674 01:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.674 01:29:25 -- common/autotest_common.sh@10 -- # set +x 00:21:33.674 01:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.674 01:29:25 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:33.674 01:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.674 01:29:25 -- common/autotest_common.sh@10 -- # set +x 00:21:33.674 01:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.674 01:29:25 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:33.674 01:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.674 01:29:25 -- common/autotest_common.sh@10 -- # set +x 00:21:33.674 [2024-07-27 01:29:25.273895] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.674 01:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.674 01:29:25 -- target/perf_adq.sh@94 -- # perfpid=692836 00:21:33.674 01:29:25 -- target/perf_adq.sh@95 -- # sleep 2 00:21:33.674 01:29:25 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:33.674 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.573 01:29:27 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:21:35.573 01:29:27 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:35.573 01:29:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.573 01:29:27 -- target/perf_adq.sh@97 -- # wc -l 00:21:35.573 01:29:27 -- common/autotest_common.sh@10 -- # set +x 00:21:35.573 01:29:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.573 01:29:27 -- target/perf_adq.sh@97 -- # count=2 00:21:35.573 01:29:27 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:21:35.573 01:29:27 -- target/perf_adq.sh@103 -- # wait 692836 00:21:43.684 Initializing NVMe Controllers 00:21:43.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:43.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:43.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:43.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:43.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:43.684 Initialization complete. Launching workers. 
00:21:43.684 ======================================================== 00:21:43.684 Latency(us) 00:21:43.684 Device Information : IOPS MiB/s Average min max 00:21:43.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5779.10 22.57 11077.41 1608.69 59228.95 00:21:43.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4645.60 18.15 13784.15 3565.20 58858.99 00:21:43.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11987.50 46.83 5339.21 1639.74 9288.40 00:21:43.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5707.60 22.30 11217.84 1486.35 59211.53 00:21:43.684 ======================================================== 00:21:43.684 Total : 28119.80 109.84 9106.88 1486.35 59228.95 00:21:43.684 00:21:43.684 01:29:35 -- target/perf_adq.sh@104 -- # nvmftestfini 00:21:43.684 01:29:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:43.684 01:29:35 -- nvmf/common.sh@116 -- # sync 00:21:43.684 01:29:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:43.684 01:29:35 -- nvmf/common.sh@119 -- # set +e 00:21:43.684 01:29:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:43.684 01:29:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:43.684 rmmod nvme_tcp 00:21:43.684 rmmod nvme_fabrics 00:21:43.942 rmmod nvme_keyring 00:21:43.942 01:29:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:43.942 01:29:35 -- nvmf/common.sh@123 -- # set -e 00:21:43.942 01:29:35 -- nvmf/common.sh@124 -- # return 0 00:21:43.942 01:29:35 -- nvmf/common.sh@477 -- # '[' -n 692737 ']' 00:21:43.942 01:29:35 -- nvmf/common.sh@478 -- # killprocess 692737 00:21:43.942 01:29:35 -- common/autotest_common.sh@926 -- # '[' -z 692737 ']' 00:21:43.942 01:29:35 -- common/autotest_common.sh@930 -- # kill -0 692737 00:21:43.942 01:29:35 -- common/autotest_common.sh@931 -- # uname 00:21:43.942 01:29:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:43.942 01:29:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 692737 00:21:43.942 01:29:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:43.942 01:29:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:43.942 01:29:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 692737' 00:21:43.942 killing process with pid 692737 00:21:43.942 01:29:35 -- common/autotest_common.sh@945 -- # kill 692737 00:21:43.942 01:29:35 -- common/autotest_common.sh@950 -- # wait 692737 00:21:44.202 01:29:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:44.202 01:29:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:44.202 01:29:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:44.202 01:29:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.202 01:29:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:44.202 01:29:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.202 01:29:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.202 01:29:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.495 01:29:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:47.495 01:29:38 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:21:47.495 00:21:47.495 real 0m45.323s 00:21:47.495 user 2m35.428s 00:21:47.495 sys 0m12.229s 00:21:47.495 01:29:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:47.495 01:29:38 -- common/autotest_common.sh@10 -- # set +x 00:21:47.495 
************************************ 00:21:47.495 END TEST nvmf_perf_adq 00:21:47.495 ************************************ 00:21:47.495 01:29:38 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:47.495 01:29:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:47.495 01:29:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:47.495 01:29:38 -- common/autotest_common.sh@10 -- # set +x 00:21:47.495 ************************************ 00:21:47.495 START TEST nvmf_shutdown 00:21:47.495 ************************************ 00:21:47.495 01:29:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:47.495 * Looking for test storage... 00:21:47.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:47.495 01:29:38 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.495 01:29:38 -- nvmf/common.sh@7 -- # uname -s 00:21:47.495 01:29:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.495 01:29:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.495 01:29:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.495 01:29:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.495 01:29:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.495 01:29:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.495 01:29:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.495 01:29:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.495 01:29:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.495 01:29:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.495 01:29:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.495 01:29:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.495 01:29:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.495 01:29:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.495 01:29:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.495 01:29:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.495 01:29:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.495 01:29:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.495 01:29:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.495 01:29:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.495 01:29:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.495 01:29:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.495 01:29:38 -- paths/export.sh@5 -- # export PATH 00:21:47.495 01:29:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.495 01:29:38 -- nvmf/common.sh@46 -- # : 0 00:21:47.495 01:29:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:47.495 01:29:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:47.495 01:29:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:47.495 01:29:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.495 01:29:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.495 01:29:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:47.495 01:29:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:47.495 01:29:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:47.495 01:29:38 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:47.495 01:29:38 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:47.495 01:29:38 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:47.495 01:29:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:47.495 01:29:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:47.495 01:29:38 -- common/autotest_common.sh@10 -- # set +x 00:21:47.495 ************************************ 00:21:47.495 START TEST nvmf_shutdown_tc1 00:21:47.495 ************************************ 00:21:47.495 01:29:38 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:21:47.495 01:29:38 -- target/shutdown.sh@74 -- # starttarget 00:21:47.495 01:29:38 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:47.495 01:29:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:47.495 01:29:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.495 01:29:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:47.495 01:29:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:47.495 01:29:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:47.495 
01:29:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.496 01:29:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.496 01:29:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.496 01:29:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:47.496 01:29:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:47.496 01:29:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:47.496 01:29:38 -- common/autotest_common.sh@10 -- # set +x 00:21:49.402 01:29:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:49.402 01:29:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:49.402 01:29:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:49.402 01:29:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:49.402 01:29:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:49.402 01:29:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:49.402 01:29:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:49.402 01:29:40 -- nvmf/common.sh@294 -- # net_devs=() 00:21:49.402 01:29:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:49.402 01:29:40 -- nvmf/common.sh@295 -- # e810=() 00:21:49.402 01:29:40 -- nvmf/common.sh@295 -- # local -ga e810 00:21:49.402 01:29:40 -- nvmf/common.sh@296 -- # x722=() 00:21:49.402 01:29:40 -- nvmf/common.sh@296 -- # local -ga x722 00:21:49.403 01:29:40 -- nvmf/common.sh@297 -- # mlx=() 00:21:49.403 01:29:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:49.403 01:29:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.403 01:29:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.403 01:29:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.403 01:29:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.403 01:29:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.403 01:29:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.403 01:29:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.403 01:29:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.403 01:29:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.403 01:29:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.403 01:29:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.403 01:29:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:49.403 01:29:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:49.403 01:29:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:49.403 01:29:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:49.403 01:29:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:49.403 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:49.403 01:29:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:21:49.403 01:29:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:49.403 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:49.403 01:29:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:49.403 01:29:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:49.403 01:29:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.403 01:29:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:49.403 01:29:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.403 01:29:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:49.403 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:49.403 01:29:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.403 01:29:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:49.403 01:29:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.403 01:29:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:49.403 01:29:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.403 01:29:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:49.403 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:49.403 01:29:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.403 01:29:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:49.403 01:29:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:49.403 01:29:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:49.403 01:29:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.403 01:29:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.403 01:29:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.403 01:29:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:49.403 01:29:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.403 01:29:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.403 01:29:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:49.403 01:29:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.403 01:29:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.403 01:29:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:49.403 01:29:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:49.403 01:29:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.403 01:29:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.403 01:29:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.403 01:29:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.403 01:29:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:49.403 01:29:40 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.403 01:29:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.403 01:29:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.403 01:29:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:49.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:21:49.403 00:21:49.403 --- 10.0.0.2 ping statistics --- 00:21:49.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.403 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:21:49.403 01:29:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:21:49.403 00:21:49.403 --- 10.0.0.1 ping statistics --- 00:21:49.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.403 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:21:49.403 01:29:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.403 01:29:40 -- nvmf/common.sh@410 -- # return 0 00:21:49.403 01:29:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:49.403 01:29:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.403 01:29:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:49.403 01:29:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.403 01:29:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:49.403 01:29:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:49.403 01:29:40 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:49.403 01:29:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:49.403 01:29:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:49.403 01:29:40 -- common/autotest_common.sh@10 -- # set +x 00:21:49.403 01:29:40 -- nvmf/common.sh@469 -- # nvmfpid=696107 00:21:49.403 01:29:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:49.403 01:29:40 -- nvmf/common.sh@470 -- # waitforlisten 696107 00:21:49.403 01:29:40 -- common/autotest_common.sh@819 -- # '[' -z 696107 ']' 00:21:49.403 01:29:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.403 01:29:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:49.403 01:29:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.403 01:29:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:49.403 01:29:40 -- common/autotest_common.sh@10 -- # set +x 00:21:49.403 [2024-07-27 01:29:41.020605] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:49.403 [2024-07-27 01:29:41.020697] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.403 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.403 [2024-07-27 01:29:41.090961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.662 [2024-07-27 01:29:41.211596] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:49.662 [2024-07-27 01:29:41.211782] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.662 [2024-07-27 01:29:41.211803] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.662 [2024-07-27 01:29:41.211818] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.662 [2024-07-27 01:29:41.211930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.662 [2024-07-27 01:29:41.212038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.662 [2024-07-27 01:29:41.212118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:49.662 [2024-07-27 01:29:41.212122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.595 01:29:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:50.595 01:29:41 -- common/autotest_common.sh@852 -- # return 0 00:21:50.595 01:29:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:50.595 01:29:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:50.595 01:29:41 -- common/autotest_common.sh@10 -- # set +x 00:21:50.595 01:29:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.595 01:29:42 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.595 01:29:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.595 01:29:42 -- common/autotest_common.sh@10 -- # set +x 00:21:50.595 [2024-07-27 01:29:42.011630] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.595 01:29:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.595 01:29:42 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:50.595 01:29:42 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:50.595 01:29:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:50.595 01:29:42 -- common/autotest_common.sh@10 -- # set +x 00:21:50.595 01:29:42 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:50.595 01:29:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.595 01:29:42 -- target/shutdown.sh@28 -- # cat 00:21:50.595 01:29:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.595 01:29:42 -- target/shutdown.sh@28 -- # cat 00:21:50.595 01:29:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.595 01:29:42 -- target/shutdown.sh@28 -- # cat 00:21:50.595 01:29:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.595 01:29:42 -- target/shutdown.sh@28 -- # cat 00:21:50.595 01:29:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.595 01:29:42 -- target/shutdown.sh@28 -- # cat 00:21:50.595 01:29:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.595 01:29:42 -- 
target/shutdown.sh@28 -- # cat 00:21:50.595 01:29:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.595 01:29:42 -- target/shutdown.sh@28 -- # cat 00:21:50.596 01:29:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.596 01:29:42 -- target/shutdown.sh@28 -- # cat 00:21:50.596 01:29:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.596 01:29:42 -- target/shutdown.sh@28 -- # cat 00:21:50.596 01:29:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.596 01:29:42 -- target/shutdown.sh@28 -- # cat 00:21:50.596 01:29:42 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:50.596 01:29:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.596 01:29:42 -- common/autotest_common.sh@10 -- # set +x 00:21:50.596 Malloc1 00:21:50.596 [2024-07-27 01:29:42.097252] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.596 Malloc2 00:21:50.596 Malloc3 00:21:50.596 Malloc4 00:21:50.596 Malloc5 00:21:50.596 Malloc6 00:21:50.854 Malloc7 00:21:50.854 Malloc8 00:21:50.854 Malloc9 00:21:50.854 Malloc10 00:21:50.854 01:29:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.854 01:29:42 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:50.854 01:29:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:50.854 01:29:42 -- common/autotest_common.sh@10 -- # set +x 00:21:50.854 01:29:42 -- target/shutdown.sh@78 -- # perfpid=696422 00:21:50.854 01:29:42 -- target/shutdown.sh@79 -- # waitforlisten 696422 /var/tmp/bdevperf.sock 00:21:50.854 01:29:42 -- common/autotest_common.sh@819 -- # '[' -z 696422 ']' 00:21:50.854 01:29:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.854 01:29:42 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:50.854 01:29:42 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:50.854 01:29:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:50.854 01:29:42 -- nvmf/common.sh@520 -- # config=() 00:21:50.854 01:29:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
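The gen_nvmf_target_json call above, together with the heredoc loop traced below, builds the JSON configuration handed to the bdev_svc app via --json /dev/fd/63: one bdev_nvme_attach_controller parameter block per subsystem (1 through 10 here), each pointing at the TCP listener and nqn.2016-06.io.spdk:cnodeN, joined by jq/printf. With the variables expanded as in the final output further down, a single entry looks like:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }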
00:21:50.854 01:29:42 -- nvmf/common.sh@520 -- # local subsystem config 00:21:50.854 01:29:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:50.854 01:29:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.854 01:29:42 -- common/autotest_common.sh@10 -- # set +x 00:21:50.854 01:29:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.854 { 00:21:50.854 "params": { 00:21:50.854 "name": "Nvme$subsystem", 00:21:50.854 "trtype": "$TEST_TRANSPORT", 00:21:50.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.854 "adrfam": "ipv4", 00:21:50.854 "trsvcid": "$NVMF_PORT", 00:21:50.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.854 "hdgst": ${hdgst:-false}, 00:21:50.854 "ddgst": ${ddgst:-false} 00:21:50.854 }, 00:21:50.854 "method": "bdev_nvme_attach_controller" 00:21:50.854 } 00:21:50.854 EOF 00:21:50.854 )") 00:21:50.854 01:29:42 -- nvmf/common.sh@542 -- # cat 00:21:50.854 01:29:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.854 01:29:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.854 { 00:21:50.854 "params": { 00:21:50.854 "name": "Nvme$subsystem", 00:21:50.854 "trtype": "$TEST_TRANSPORT", 00:21:50.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.854 "adrfam": "ipv4", 00:21:50.854 "trsvcid": "$NVMF_PORT", 00:21:50.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.854 "hdgst": ${hdgst:-false}, 00:21:50.854 "ddgst": ${ddgst:-false} 00:21:50.854 }, 00:21:50.854 "method": "bdev_nvme_attach_controller" 00:21:50.854 } 00:21:50.854 EOF 00:21:50.854 )") 00:21:50.854 01:29:42 -- nvmf/common.sh@542 -- # cat 00:21:50.854 01:29:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.854 01:29:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.854 { 00:21:50.854 "params": { 00:21:50.854 "name": "Nvme$subsystem", 00:21:50.854 "trtype": "$TEST_TRANSPORT", 00:21:50.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.854 "adrfam": "ipv4", 00:21:50.854 "trsvcid": "$NVMF_PORT", 00:21:50.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.854 "hdgst": ${hdgst:-false}, 00:21:50.854 "ddgst": ${ddgst:-false} 00:21:50.854 }, 00:21:50.854 "method": "bdev_nvme_attach_controller" 00:21:50.854 } 00:21:50.854 EOF 00:21:50.854 )") 00:21:50.854 01:29:42 -- nvmf/common.sh@542 -- # cat 00:21:50.854 01:29:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.854 01:29:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.854 { 00:21:50.854 "params": { 00:21:50.854 "name": "Nvme$subsystem", 00:21:50.854 "trtype": "$TEST_TRANSPORT", 00:21:50.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.854 "adrfam": "ipv4", 00:21:50.854 "trsvcid": "$NVMF_PORT", 00:21:50.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.854 "hdgst": ${hdgst:-false}, 00:21:50.854 "ddgst": ${ddgst:-false} 00:21:50.854 }, 00:21:50.854 "method": "bdev_nvme_attach_controller" 00:21:50.854 } 00:21:50.854 EOF 00:21:50.854 )") 00:21:50.854 01:29:42 -- nvmf/common.sh@542 -- # cat 00:21:50.854 01:29:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.854 01:29:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.854 { 00:21:50.854 "params": { 00:21:50.854 "name": "Nvme$subsystem", 00:21:50.854 "trtype": "$TEST_TRANSPORT", 00:21:50.854 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:21:50.854 "adrfam": "ipv4", 00:21:50.854 "trsvcid": "$NVMF_PORT", 00:21:50.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.854 "hdgst": ${hdgst:-false}, 00:21:50.854 "ddgst": ${ddgst:-false} 00:21:50.854 }, 00:21:50.854 "method": "bdev_nvme_attach_controller" 00:21:50.854 } 00:21:50.854 EOF 00:21:50.854 )") 00:21:50.854 01:29:42 -- nvmf/common.sh@542 -- # cat 00:21:50.854 01:29:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.854 01:29:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.854 { 00:21:50.854 "params": { 00:21:50.854 "name": "Nvme$subsystem", 00:21:50.854 "trtype": "$TEST_TRANSPORT", 00:21:50.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.854 "adrfam": "ipv4", 00:21:50.854 "trsvcid": "$NVMF_PORT", 00:21:50.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.854 "hdgst": ${hdgst:-false}, 00:21:50.854 "ddgst": ${ddgst:-false} 00:21:50.854 }, 00:21:50.854 "method": "bdev_nvme_attach_controller" 00:21:50.854 } 00:21:50.854 EOF 00:21:50.854 )") 00:21:50.854 01:29:42 -- nvmf/common.sh@542 -- # cat 00:21:50.854 01:29:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.854 01:29:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.854 { 00:21:50.854 "params": { 00:21:50.854 "name": "Nvme$subsystem", 00:21:50.854 "trtype": "$TEST_TRANSPORT", 00:21:50.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.854 "adrfam": "ipv4", 00:21:50.854 "trsvcid": "$NVMF_PORT", 00:21:50.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.854 "hdgst": ${hdgst:-false}, 00:21:50.854 "ddgst": ${ddgst:-false} 00:21:50.854 }, 00:21:50.854 "method": "bdev_nvme_attach_controller" 00:21:50.854 } 00:21:50.854 EOF 00:21:50.854 )") 00:21:50.855 01:29:42 -- nvmf/common.sh@542 -- # cat 00:21:50.855 01:29:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.855 01:29:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.855 { 00:21:50.855 "params": { 00:21:50.855 "name": "Nvme$subsystem", 00:21:50.855 "trtype": "$TEST_TRANSPORT", 00:21:50.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.855 "adrfam": "ipv4", 00:21:50.855 "trsvcid": "$NVMF_PORT", 00:21:50.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.855 "hdgst": ${hdgst:-false}, 00:21:50.855 "ddgst": ${ddgst:-false} 00:21:50.855 }, 00:21:50.855 "method": "bdev_nvme_attach_controller" 00:21:50.855 } 00:21:50.855 EOF 00:21:50.855 )") 00:21:50.855 01:29:42 -- nvmf/common.sh@542 -- # cat 00:21:50.855 01:29:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.855 01:29:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.855 { 00:21:50.855 "params": { 00:21:50.855 "name": "Nvme$subsystem", 00:21:50.855 "trtype": "$TEST_TRANSPORT", 00:21:50.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.855 "adrfam": "ipv4", 00:21:50.855 "trsvcid": "$NVMF_PORT", 00:21:50.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.855 "hdgst": ${hdgst:-false}, 00:21:50.855 "ddgst": ${ddgst:-false} 00:21:50.855 }, 00:21:50.855 "method": "bdev_nvme_attach_controller" 00:21:50.855 } 00:21:50.855 EOF 00:21:50.855 )") 00:21:50.855 01:29:42 -- nvmf/common.sh@542 -- # cat 00:21:50.855 01:29:42 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:21:50.855 01:29:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.855 { 00:21:50.855 "params": { 00:21:50.855 "name": "Nvme$subsystem", 00:21:50.855 "trtype": "$TEST_TRANSPORT", 00:21:50.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.855 "adrfam": "ipv4", 00:21:50.855 "trsvcid": "$NVMF_PORT", 00:21:50.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.855 "hdgst": ${hdgst:-false}, 00:21:50.855 "ddgst": ${ddgst:-false} 00:21:50.855 }, 00:21:50.855 "method": "bdev_nvme_attach_controller" 00:21:50.855 } 00:21:50.855 EOF 00:21:50.855 )") 00:21:50.855 01:29:42 -- nvmf/common.sh@542 -- # cat 00:21:50.855 01:29:42 -- nvmf/common.sh@544 -- # jq . 00:21:50.855 01:29:42 -- nvmf/common.sh@545 -- # IFS=, 00:21:50.855 01:29:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:50.855 "params": { 00:21:50.855 "name": "Nvme1", 00:21:50.855 "trtype": "tcp", 00:21:50.855 "traddr": "10.0.0.2", 00:21:50.855 "adrfam": "ipv4", 00:21:50.855 "trsvcid": "4420", 00:21:50.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.855 "hdgst": false, 00:21:50.855 "ddgst": false 00:21:50.855 }, 00:21:50.855 "method": "bdev_nvme_attach_controller" 00:21:50.855 },{ 00:21:50.855 "params": { 00:21:50.855 "name": "Nvme2", 00:21:50.855 "trtype": "tcp", 00:21:50.855 "traddr": "10.0.0.2", 00:21:50.855 "adrfam": "ipv4", 00:21:50.855 "trsvcid": "4420", 00:21:50.855 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:50.855 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:50.855 "hdgst": false, 00:21:50.855 "ddgst": false 00:21:50.855 }, 00:21:50.855 "method": "bdev_nvme_attach_controller" 00:21:50.855 },{ 00:21:50.855 "params": { 00:21:50.855 "name": "Nvme3", 00:21:50.855 "trtype": "tcp", 00:21:50.855 "traddr": "10.0.0.2", 00:21:50.855 "adrfam": "ipv4", 00:21:50.855 "trsvcid": "4420", 00:21:50.855 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:50.855 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:50.855 "hdgst": false, 00:21:50.855 "ddgst": false 00:21:50.855 }, 00:21:50.855 "method": "bdev_nvme_attach_controller" 00:21:50.855 },{ 00:21:50.855 "params": { 00:21:50.855 "name": "Nvme4", 00:21:50.855 "trtype": "tcp", 00:21:50.855 "traddr": "10.0.0.2", 00:21:50.855 "adrfam": "ipv4", 00:21:50.855 "trsvcid": "4420", 00:21:50.855 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:50.855 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:50.855 "hdgst": false, 00:21:50.855 "ddgst": false 00:21:50.855 }, 00:21:50.855 "method": "bdev_nvme_attach_controller" 00:21:50.855 },{ 00:21:50.855 "params": { 00:21:50.855 "name": "Nvme5", 00:21:50.855 "trtype": "tcp", 00:21:50.855 "traddr": "10.0.0.2", 00:21:50.855 "adrfam": "ipv4", 00:21:50.855 "trsvcid": "4420", 00:21:50.855 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:50.855 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:50.855 "hdgst": false, 00:21:50.855 "ddgst": false 00:21:50.855 }, 00:21:50.855 "method": "bdev_nvme_attach_controller" 00:21:50.855 },{ 00:21:50.855 "params": { 00:21:50.855 "name": "Nvme6", 00:21:50.855 "trtype": "tcp", 00:21:50.855 "traddr": "10.0.0.2", 00:21:50.855 "adrfam": "ipv4", 00:21:50.855 "trsvcid": "4420", 00:21:50.855 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:50.855 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:50.855 "hdgst": false, 00:21:50.855 "ddgst": false 00:21:50.855 }, 00:21:50.855 "method": "bdev_nvme_attach_controller" 00:21:50.855 },{ 00:21:50.855 "params": { 00:21:50.855 "name": "Nvme7", 00:21:50.855 "trtype": 
"tcp", 00:21:50.855 "traddr": "10.0.0.2", 00:21:50.855 "adrfam": "ipv4", 00:21:50.855 "trsvcid": "4420", 00:21:50.855 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:50.855 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:50.855 "hdgst": false, 00:21:50.855 "ddgst": false 00:21:50.855 }, 00:21:50.855 "method": "bdev_nvme_attach_controller" 00:21:50.855 },{ 00:21:50.855 "params": { 00:21:50.855 "name": "Nvme8", 00:21:50.855 "trtype": "tcp", 00:21:50.855 "traddr": "10.0.0.2", 00:21:50.855 "adrfam": "ipv4", 00:21:50.855 "trsvcid": "4420", 00:21:50.855 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:50.855 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:50.855 "hdgst": false, 00:21:50.855 "ddgst": false 00:21:50.855 }, 00:21:50.855 "method": "bdev_nvme_attach_controller" 00:21:50.855 },{ 00:21:50.855 "params": { 00:21:50.855 "name": "Nvme9", 00:21:50.855 "trtype": "tcp", 00:21:50.855 "traddr": "10.0.0.2", 00:21:50.855 "adrfam": "ipv4", 00:21:50.855 "trsvcid": "4420", 00:21:50.855 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:50.855 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:50.855 "hdgst": false, 00:21:50.855 "ddgst": false 00:21:50.855 }, 00:21:50.855 "method": "bdev_nvme_attach_controller" 00:21:50.855 },{ 00:21:50.855 "params": { 00:21:50.855 "name": "Nvme10", 00:21:50.855 "trtype": "tcp", 00:21:50.855 "traddr": "10.0.0.2", 00:21:50.855 "adrfam": "ipv4", 00:21:50.855 "trsvcid": "4420", 00:21:50.855 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:50.855 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:50.855 "hdgst": false, 00:21:50.855 "ddgst": false 00:21:50.855 }, 00:21:50.855 "method": "bdev_nvme_attach_controller" 00:21:50.855 }' 00:21:50.855 [2024-07-27 01:29:42.587633] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:50.855 [2024-07-27 01:29:42.587719] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:51.113 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.113 [2024-07-27 01:29:42.651964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.113 [2024-07-27 01:29:42.760555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.008 01:29:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:53.008 01:29:44 -- common/autotest_common.sh@852 -- # return 0 00:21:53.008 01:29:44 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:53.008 01:29:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:53.008 01:29:44 -- common/autotest_common.sh@10 -- # set +x 00:21:53.009 01:29:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:53.009 01:29:44 -- target/shutdown.sh@83 -- # kill -9 696422 00:21:53.009 01:29:44 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:53.009 01:29:44 -- target/shutdown.sh@87 -- # sleep 1 00:21:53.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 696422 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:53.572 01:29:45 -- target/shutdown.sh@88 -- # kill -0 696107 00:21:53.572 01:29:45 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:53.572 01:29:45 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 
2 3 4 5 6 7 8 9 10 00:21:53.572 01:29:45 -- nvmf/common.sh@520 -- # config=() 00:21:53.572 01:29:45 -- nvmf/common.sh@520 -- # local subsystem config 00:21:53.572 01:29:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:53.572 01:29:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:53.572 { 00:21:53.572 "params": { 00:21:53.572 "name": "Nvme$subsystem", 00:21:53.572 "trtype": "$TEST_TRANSPORT", 00:21:53.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.572 "adrfam": "ipv4", 00:21:53.572 "trsvcid": "$NVMF_PORT", 00:21:53.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.572 "hdgst": ${hdgst:-false}, 00:21:53.572 "ddgst": ${ddgst:-false} 00:21:53.572 }, 00:21:53.572 "method": "bdev_nvme_attach_controller" 00:21:53.572 } 00:21:53.572 EOF 00:21:53.572 )") 00:21:53.572 01:29:45 -- nvmf/common.sh@542 -- # cat 00:21:53.572 01:29:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:53.572 01:29:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:53.572 { 00:21:53.572 "params": { 00:21:53.572 "name": "Nvme$subsystem", 00:21:53.572 "trtype": "$TEST_TRANSPORT", 00:21:53.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.572 "adrfam": "ipv4", 00:21:53.572 "trsvcid": "$NVMF_PORT", 00:21:53.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.572 "hdgst": ${hdgst:-false}, 00:21:53.572 "ddgst": ${ddgst:-false} 00:21:53.572 }, 00:21:53.572 "method": "bdev_nvme_attach_controller" 00:21:53.572 } 00:21:53.572 EOF 00:21:53.572 )") 00:21:53.572 01:29:45 -- nvmf/common.sh@542 -- # cat 00:21:53.830 01:29:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:53.830 { 00:21:53.830 "params": { 00:21:53.830 "name": "Nvme$subsystem", 00:21:53.830 "trtype": "$TEST_TRANSPORT", 00:21:53.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.830 "adrfam": "ipv4", 00:21:53.830 "trsvcid": "$NVMF_PORT", 00:21:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.830 "hdgst": ${hdgst:-false}, 00:21:53.830 "ddgst": ${ddgst:-false} 00:21:53.830 }, 00:21:53.830 "method": "bdev_nvme_attach_controller" 00:21:53.830 } 00:21:53.830 EOF 00:21:53.830 )") 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # cat 00:21:53.830 01:29:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:53.830 { 00:21:53.830 "params": { 00:21:53.830 "name": "Nvme$subsystem", 00:21:53.830 "trtype": "$TEST_TRANSPORT", 00:21:53.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.830 "adrfam": "ipv4", 00:21:53.830 "trsvcid": "$NVMF_PORT", 00:21:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.830 "hdgst": ${hdgst:-false}, 00:21:53.830 "ddgst": ${ddgst:-false} 00:21:53.830 }, 00:21:53.830 "method": "bdev_nvme_attach_controller" 00:21:53.830 } 00:21:53.830 EOF 00:21:53.830 )") 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # cat 00:21:53.830 01:29:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:53.830 { 00:21:53.830 "params": { 00:21:53.830 "name": "Nvme$subsystem", 00:21:53.830 "trtype": "$TEST_TRANSPORT", 00:21:53.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.830 "adrfam": "ipv4", 00:21:53.830 "trsvcid": 
"$NVMF_PORT", 00:21:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.830 "hdgst": ${hdgst:-false}, 00:21:53.830 "ddgst": ${ddgst:-false} 00:21:53.830 }, 00:21:53.830 "method": "bdev_nvme_attach_controller" 00:21:53.830 } 00:21:53.830 EOF 00:21:53.830 )") 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # cat 00:21:53.830 01:29:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:53.830 { 00:21:53.830 "params": { 00:21:53.830 "name": "Nvme$subsystem", 00:21:53.830 "trtype": "$TEST_TRANSPORT", 00:21:53.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.830 "adrfam": "ipv4", 00:21:53.830 "trsvcid": "$NVMF_PORT", 00:21:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.830 "hdgst": ${hdgst:-false}, 00:21:53.830 "ddgst": ${ddgst:-false} 00:21:53.830 }, 00:21:53.830 "method": "bdev_nvme_attach_controller" 00:21:53.830 } 00:21:53.830 EOF 00:21:53.830 )") 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # cat 00:21:53.830 01:29:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:53.830 { 00:21:53.830 "params": { 00:21:53.830 "name": "Nvme$subsystem", 00:21:53.830 "trtype": "$TEST_TRANSPORT", 00:21:53.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.830 "adrfam": "ipv4", 00:21:53.830 "trsvcid": "$NVMF_PORT", 00:21:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.830 "hdgst": ${hdgst:-false}, 00:21:53.830 "ddgst": ${ddgst:-false} 00:21:53.830 }, 00:21:53.830 "method": "bdev_nvme_attach_controller" 00:21:53.830 } 00:21:53.830 EOF 00:21:53.830 )") 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # cat 00:21:53.830 01:29:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:53.830 { 00:21:53.830 "params": { 00:21:53.830 "name": "Nvme$subsystem", 00:21:53.830 "trtype": "$TEST_TRANSPORT", 00:21:53.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.830 "adrfam": "ipv4", 00:21:53.830 "trsvcid": "$NVMF_PORT", 00:21:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.830 "hdgst": ${hdgst:-false}, 00:21:53.830 "ddgst": ${ddgst:-false} 00:21:53.830 }, 00:21:53.830 "method": "bdev_nvme_attach_controller" 00:21:53.830 } 00:21:53.830 EOF 00:21:53.830 )") 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # cat 00:21:53.830 01:29:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:53.830 { 00:21:53.830 "params": { 00:21:53.830 "name": "Nvme$subsystem", 00:21:53.830 "trtype": "$TEST_TRANSPORT", 00:21:53.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.830 "adrfam": "ipv4", 00:21:53.830 "trsvcid": "$NVMF_PORT", 00:21:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.830 "hdgst": ${hdgst:-false}, 00:21:53.830 "ddgst": ${ddgst:-false} 00:21:53.830 }, 00:21:53.830 "method": "bdev_nvme_attach_controller" 00:21:53.830 } 00:21:53.830 EOF 00:21:53.830 )") 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # cat 00:21:53.830 01:29:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # 
config+=("$(cat <<-EOF 00:21:53.830 { 00:21:53.830 "params": { 00:21:53.830 "name": "Nvme$subsystem", 00:21:53.830 "trtype": "$TEST_TRANSPORT", 00:21:53.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.830 "adrfam": "ipv4", 00:21:53.830 "trsvcid": "$NVMF_PORT", 00:21:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.830 "hdgst": ${hdgst:-false}, 00:21:53.830 "ddgst": ${ddgst:-false} 00:21:53.830 }, 00:21:53.830 "method": "bdev_nvme_attach_controller" 00:21:53.830 } 00:21:53.830 EOF 00:21:53.830 )") 00:21:53.830 01:29:45 -- nvmf/common.sh@542 -- # cat 00:21:53.830 01:29:45 -- nvmf/common.sh@544 -- # jq . 00:21:53.830 01:29:45 -- nvmf/common.sh@545 -- # IFS=, 00:21:53.830 01:29:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:53.830 "params": { 00:21:53.830 "name": "Nvme1", 00:21:53.830 "trtype": "tcp", 00:21:53.830 "traddr": "10.0.0.2", 00:21:53.830 "adrfam": "ipv4", 00:21:53.830 "trsvcid": "4420", 00:21:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:53.830 "hdgst": false, 00:21:53.830 "ddgst": false 00:21:53.830 }, 00:21:53.830 "method": "bdev_nvme_attach_controller" 00:21:53.830 },{ 00:21:53.830 "params": { 00:21:53.830 "name": "Nvme2", 00:21:53.830 "trtype": "tcp", 00:21:53.830 "traddr": "10.0.0.2", 00:21:53.830 "adrfam": "ipv4", 00:21:53.830 "trsvcid": "4420", 00:21:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:53.830 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:53.830 "hdgst": false, 00:21:53.830 "ddgst": false 00:21:53.830 }, 00:21:53.830 "method": "bdev_nvme_attach_controller" 00:21:53.830 },{ 00:21:53.830 "params": { 00:21:53.830 "name": "Nvme3", 00:21:53.830 "trtype": "tcp", 00:21:53.830 "traddr": "10.0.0.2", 00:21:53.830 "adrfam": "ipv4", 00:21:53.830 "trsvcid": "4420", 00:21:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:53.830 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:53.830 "hdgst": false, 00:21:53.830 "ddgst": false 00:21:53.830 }, 00:21:53.830 "method": "bdev_nvme_attach_controller" 00:21:53.830 },{ 00:21:53.830 "params": { 00:21:53.830 "name": "Nvme4", 00:21:53.830 "trtype": "tcp", 00:21:53.830 "traddr": "10.0.0.2", 00:21:53.830 "adrfam": "ipv4", 00:21:53.830 "trsvcid": "4420", 00:21:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:53.831 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:53.831 "hdgst": false, 00:21:53.831 "ddgst": false 00:21:53.831 }, 00:21:53.831 "method": "bdev_nvme_attach_controller" 00:21:53.831 },{ 00:21:53.831 "params": { 00:21:53.831 "name": "Nvme5", 00:21:53.831 "trtype": "tcp", 00:21:53.831 "traddr": "10.0.0.2", 00:21:53.831 "adrfam": "ipv4", 00:21:53.831 "trsvcid": "4420", 00:21:53.831 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:53.831 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:53.831 "hdgst": false, 00:21:53.831 "ddgst": false 00:21:53.831 }, 00:21:53.831 "method": "bdev_nvme_attach_controller" 00:21:53.831 },{ 00:21:53.831 "params": { 00:21:53.831 "name": "Nvme6", 00:21:53.831 "trtype": "tcp", 00:21:53.831 "traddr": "10.0.0.2", 00:21:53.831 "adrfam": "ipv4", 00:21:53.831 "trsvcid": "4420", 00:21:53.831 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:53.831 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:53.831 "hdgst": false, 00:21:53.831 "ddgst": false 00:21:53.831 }, 00:21:53.831 "method": "bdev_nvme_attach_controller" 00:21:53.831 },{ 00:21:53.831 "params": { 00:21:53.831 "name": "Nvme7", 00:21:53.831 "trtype": "tcp", 00:21:53.831 "traddr": "10.0.0.2", 00:21:53.831 "adrfam": "ipv4", 
00:21:53.831 "trsvcid": "4420", 00:21:53.831 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:53.831 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:53.831 "hdgst": false, 00:21:53.831 "ddgst": false 00:21:53.831 }, 00:21:53.831 "method": "bdev_nvme_attach_controller" 00:21:53.831 },{ 00:21:53.831 "params": { 00:21:53.831 "name": "Nvme8", 00:21:53.831 "trtype": "tcp", 00:21:53.831 "traddr": "10.0.0.2", 00:21:53.831 "adrfam": "ipv4", 00:21:53.831 "trsvcid": "4420", 00:21:53.831 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:53.831 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:53.831 "hdgst": false, 00:21:53.831 "ddgst": false 00:21:53.831 }, 00:21:53.831 "method": "bdev_nvme_attach_controller" 00:21:53.831 },{ 00:21:53.831 "params": { 00:21:53.831 "name": "Nvme9", 00:21:53.831 "trtype": "tcp", 00:21:53.831 "traddr": "10.0.0.2", 00:21:53.831 "adrfam": "ipv4", 00:21:53.831 "trsvcid": "4420", 00:21:53.831 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:53.831 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:53.831 "hdgst": false, 00:21:53.831 "ddgst": false 00:21:53.831 }, 00:21:53.831 "method": "bdev_nvme_attach_controller" 00:21:53.831 },{ 00:21:53.831 "params": { 00:21:53.831 "name": "Nvme10", 00:21:53.831 "trtype": "tcp", 00:21:53.831 "traddr": "10.0.0.2", 00:21:53.831 "adrfam": "ipv4", 00:21:53.831 "trsvcid": "4420", 00:21:53.831 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:53.831 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:53.831 "hdgst": false, 00:21:53.831 "ddgst": false 00:21:53.831 }, 00:21:53.831 "method": "bdev_nvme_attach_controller" 00:21:53.831 }' 00:21:53.831 [2024-07-27 01:29:45.360923] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:53.831 [2024-07-27 01:29:45.361010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696726 ] 00:21:53.831 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.831 [2024-07-27 01:29:45.425610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.831 [2024-07-27 01:29:45.535692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.759 Running I/O for 1 seconds... 
00:21:56.692 00:21:56.692 Latency(us) 00:21:56.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.692 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:56.692 Verification LBA range: start 0x0 length 0x400 00:21:56.692 Nvme1n1 : 1.07 411.45 25.72 0.00 0.00 151987.16 11116.85 128159.29 00:21:56.693 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:56.693 Verification LBA range: start 0x0 length 0x400 00:21:56.693 Nvme2n1 : 1.10 285.30 17.83 0.00 0.00 218785.91 20583.16 193404.02 00:21:56.693 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:56.693 Verification LBA range: start 0x0 length 0x400 00:21:56.693 Nvme3n1 : 1.10 443.17 27.70 0.00 0.00 139789.21 11262.48 119615.34 00:21:56.693 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:56.693 Verification LBA range: start 0x0 length 0x400 00:21:56.693 Nvme4n1 : 1.09 366.80 22.92 0.00 0.00 165768.39 36311.80 184083.34 00:21:56.693 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:56.693 Verification LBA range: start 0x0 length 0x400 00:21:56.693 Nvme5n1 : 1.06 410.93 25.68 0.00 0.00 148917.28 14757.74 131266.18 00:21:56.693 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:56.693 Verification LBA range: start 0x0 length 0x400 00:21:56.693 Nvme6n1 : 1.09 289.14 18.07 0.00 0.00 207973.74 36505.98 186413.51 00:21:56.693 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:56.693 Verification LBA range: start 0x0 length 0x400 00:21:56.693 Nvme7n1 : 1.10 438.63 27.41 0.00 0.00 137540.77 21845.33 117285.17 00:21:56.693 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:56.693 Verification LBA range: start 0x0 length 0x400 00:21:56.693 Nvme8n1 : 1.15 378.65 23.67 0.00 0.00 153533.10 13592.65 156121.32 00:21:56.693 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:56.693 Verification LBA range: start 0x0 length 0x400 00:21:56.693 Nvme9n1 : 1.14 422.85 26.43 0.00 0.00 136158.59 20583.16 121168.78 00:21:56.693 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:56.693 Verification LBA range: start 0x0 length 0x400 00:21:56.693 Nvme10n1 : 1.11 323.65 20.23 0.00 0.00 184192.97 4393.34 177869.56 00:21:56.693 =================================================================================================================== 00:21:56.693 Total : 3770.56 235.66 0.00 0.00 160313.02 4393.34 193404.02 00:21:56.950 01:29:48 -- target/shutdown.sh@93 -- # stoptarget 00:21:56.950 01:29:48 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:56.950 01:29:48 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:56.950 01:29:48 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:56.950 01:29:48 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:56.950 01:29:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:56.950 01:29:48 -- nvmf/common.sh@116 -- # sync 00:21:56.950 01:29:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:56.950 01:29:48 -- nvmf/common.sh@119 -- # set +e 00:21:56.951 01:29:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:56.951 01:29:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:56.951 rmmod nvme_tcp 00:21:56.951 rmmod nvme_fabrics 00:21:56.951 rmmod nvme_keyring 
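As a quick sanity check on the units in the latency table above: at an I/O size of 65536 bytes, MiB/s is simply IOPS divided by 16, so Nvme1n1's 411.45 IOPS corresponds to about 25.72 MiB/s and the total of 3770.56 IOPS to about 235.66 MiB/s, both matching the reported columns.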
00:21:56.951 01:29:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:56.951 01:29:48 -- nvmf/common.sh@123 -- # set -e 00:21:56.951 01:29:48 -- nvmf/common.sh@124 -- # return 0 00:21:56.951 01:29:48 -- nvmf/common.sh@477 -- # '[' -n 696107 ']' 00:21:56.951 01:29:48 -- nvmf/common.sh@478 -- # killprocess 696107 00:21:56.951 01:29:48 -- common/autotest_common.sh@926 -- # '[' -z 696107 ']' 00:21:56.951 01:29:48 -- common/autotest_common.sh@930 -- # kill -0 696107 00:21:56.951 01:29:48 -- common/autotest_common.sh@931 -- # uname 00:21:56.951 01:29:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:56.951 01:29:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 696107 00:21:56.951 01:29:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:56.951 01:29:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:56.951 01:29:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 696107' 00:21:56.951 killing process with pid 696107 00:21:56.951 01:29:48 -- common/autotest_common.sh@945 -- # kill 696107 00:21:56.951 01:29:48 -- common/autotest_common.sh@950 -- # wait 696107 00:21:57.516 01:29:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:57.516 01:29:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:57.516 01:29:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:57.516 01:29:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:57.516 01:29:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:57.516 01:29:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.516 01:29:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.516 01:29:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.048 01:29:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:00.048 00:22:00.048 real 0m12.328s 00:22:00.048 user 0m36.228s 00:22:00.048 sys 0m3.286s 00:22:00.048 01:29:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.048 01:29:51 -- common/autotest_common.sh@10 -- # set +x 00:22:00.048 ************************************ 00:22:00.048 END TEST nvmf_shutdown_tc1 00:22:00.048 ************************************ 00:22:00.048 01:29:51 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:00.048 01:29:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:00.048 01:29:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:00.048 01:29:51 -- common/autotest_common.sh@10 -- # set +x 00:22:00.048 ************************************ 00:22:00.048 START TEST nvmf_shutdown_tc2 00:22:00.048 ************************************ 00:22:00.048 01:29:51 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:22:00.048 01:29:51 -- target/shutdown.sh@98 -- # starttarget 00:22:00.048 01:29:51 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:00.048 01:29:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:00.048 01:29:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.048 01:29:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:00.048 01:29:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:00.048 01:29:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:00.048 01:29:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.048 01:29:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.048 01:29:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.048 01:29:51 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:00.048 01:29:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:00.048 01:29:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:00.048 01:29:51 -- common/autotest_common.sh@10 -- # set +x 00:22:00.048 01:29:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:00.048 01:29:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:00.048 01:29:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:00.048 01:29:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:00.048 01:29:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:00.048 01:29:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:00.048 01:29:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:00.048 01:29:51 -- nvmf/common.sh@294 -- # net_devs=() 00:22:00.048 01:29:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:00.048 01:29:51 -- nvmf/common.sh@295 -- # e810=() 00:22:00.048 01:29:51 -- nvmf/common.sh@295 -- # local -ga e810 00:22:00.048 01:29:51 -- nvmf/common.sh@296 -- # x722=() 00:22:00.048 01:29:51 -- nvmf/common.sh@296 -- # local -ga x722 00:22:00.048 01:29:51 -- nvmf/common.sh@297 -- # mlx=() 00:22:00.048 01:29:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:00.048 01:29:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.048 01:29:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.048 01:29:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.048 01:29:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.048 01:29:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.048 01:29:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.048 01:29:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.048 01:29:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.048 01:29:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.048 01:29:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.048 01:29:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.048 01:29:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:00.048 01:29:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:00.048 01:29:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:00.048 01:29:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:00.048 01:29:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:00.048 01:29:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:00.048 01:29:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:00.048 01:29:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:00.048 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:00.048 01:29:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:00.048 01:29:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:00.048 01:29:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.048 01:29:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.048 01:29:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:00.048 01:29:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:00.049 01:29:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:00.049 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:00.049 01:29:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:00.049 01:29:51 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:00.049 01:29:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.049 01:29:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.049 01:29:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:00.049 01:29:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:00.049 01:29:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:00.049 01:29:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:00.049 01:29:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:00.049 01:29:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.049 01:29:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:00.049 01:29:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.049 01:29:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:00.049 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:00.049 01:29:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.049 01:29:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:00.049 01:29:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.049 01:29:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:00.049 01:29:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.049 01:29:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:00.049 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:00.049 01:29:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.049 01:29:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:00.049 01:29:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:00.049 01:29:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:00.049 01:29:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:00.049 01:29:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:00.049 01:29:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.049 01:29:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.049 01:29:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.049 01:29:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:00.049 01:29:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.049 01:29:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.049 01:29:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:00.049 01:29:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.049 01:29:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.049 01:29:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:00.049 01:29:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:00.049 01:29:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.049 01:29:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.049 01:29:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.049 01:29:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.049 01:29:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:00.049 01:29:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.049 01:29:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.049 01:29:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
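Condensed, the nvmf_tcp_init sequence above builds a two-namespace TCP test topology out of the two E810 ports discovered earlier: the initiator-side port (cvl_0_1) stays in the default namespace, the target-side port (cvl_0_0) moves into its own namespace, both get addresses on the same 10.0.0.0/24, and TCP port 4420 is opened in the firewall. Using only the commands already visible in the trace (names are the ones from this run; the two ports are presumably connected so they can reach each other), the plumbing reads:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in

The bidirectional pings that follow verify this link before nvmf_tgt is started inside the namespace.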
00:22:00.049 01:29:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:00.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:22:00.049 00:22:00.049 --- 10.0.0.2 ping statistics --- 00:22:00.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.049 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:22:00.049 01:29:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:00.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:22:00.049 00:22:00.049 --- 10.0.0.1 ping statistics --- 00:22:00.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.049 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:22:00.049 01:29:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.049 01:29:51 -- nvmf/common.sh@410 -- # return 0 00:22:00.049 01:29:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:00.049 01:29:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.049 01:29:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:00.049 01:29:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:00.049 01:29:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.049 01:29:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:00.049 01:29:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:00.049 01:29:51 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:00.049 01:29:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:00.049 01:29:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:00.049 01:29:51 -- common/autotest_common.sh@10 -- # set +x 00:22:00.049 01:29:51 -- nvmf/common.sh@469 -- # nvmfpid=697636 00:22:00.049 01:29:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:00.049 01:29:51 -- nvmf/common.sh@470 -- # waitforlisten 697636 00:22:00.049 01:29:51 -- common/autotest_common.sh@819 -- # '[' -z 697636 ']' 00:22:00.049 01:29:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.049 01:29:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:00.049 01:29:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.049 01:29:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:00.049 01:29:51 -- common/autotest_common.sh@10 -- # set +x 00:22:00.049 [2024-07-27 01:29:51.496094] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:00.049 [2024-07-27 01:29:51.496162] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.049 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.049 [2024-07-27 01:29:51.561261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:00.049 [2024-07-27 01:29:51.677483] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:00.049 [2024-07-27 01:29:51.677640] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:00.049 [2024-07-27 01:29:51.677662] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.049 [2024-07-27 01:29:51.677678] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.049 [2024-07-27 01:29:51.677762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.049 [2024-07-27 01:29:51.677877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.049 [2024-07-27 01:29:51.677944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.049 [2024-07-27 01:29:51.677942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:00.982 01:29:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:00.982 01:29:52 -- common/autotest_common.sh@852 -- # return 0 00:22:00.982 01:29:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:00.982 01:29:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:00.982 01:29:52 -- common/autotest_common.sh@10 -- # set +x 00:22:00.982 01:29:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.982 01:29:52 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.982 01:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.982 01:29:52 -- common/autotest_common.sh@10 -- # set +x 00:22:00.982 [2024-07-27 01:29:52.525656] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.982 01:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.982 01:29:52 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:00.982 01:29:52 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:00.982 01:29:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:00.982 01:29:52 -- common/autotest_common.sh@10 -- # set +x 00:22:00.982 01:29:52 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:00.982 01:29:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:00.982 01:29:52 -- target/shutdown.sh@28 -- # cat 00:22:00.982 01:29:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:00.982 01:29:52 -- target/shutdown.sh@28 -- # cat 00:22:00.982 01:29:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:00.982 01:29:52 -- target/shutdown.sh@28 -- # cat 00:22:00.982 01:29:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:00.982 01:29:52 -- target/shutdown.sh@28 -- # cat 00:22:00.982 01:29:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:00.982 01:29:52 -- target/shutdown.sh@28 -- # cat 00:22:00.982 01:29:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:00.982 01:29:52 -- target/shutdown.sh@28 -- # cat 00:22:00.982 01:29:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:00.982 01:29:52 -- target/shutdown.sh@28 -- # cat 00:22:00.982 01:29:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:00.982 01:29:52 -- target/shutdown.sh@28 -- # cat 00:22:00.982 01:29:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:00.982 01:29:52 -- target/shutdown.sh@28 -- # cat 00:22:00.982 01:29:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:00.982 01:29:52 -- target/shutdown.sh@28 -- # cat 00:22:00.982 01:29:52 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:00.982 01:29:52 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.982 01:29:52 -- common/autotest_common.sh@10 -- # set +x 00:22:00.982 Malloc1 00:22:00.982 [2024-07-27 01:29:52.600882] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.982 Malloc2 00:22:00.982 Malloc3 00:22:00.982 Malloc4 00:22:01.240 Malloc5 00:22:01.240 Malloc6 00:22:01.240 Malloc7 00:22:01.240 Malloc8 00:22:01.240 Malloc9 00:22:01.498 Malloc10 00:22:01.498 01:29:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.498 01:29:53 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:01.498 01:29:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:01.498 01:29:53 -- common/autotest_common.sh@10 -- # set +x 00:22:01.498 01:29:53 -- target/shutdown.sh@102 -- # perfpid=697830 00:22:01.498 01:29:53 -- target/shutdown.sh@103 -- # waitforlisten 697830 /var/tmp/bdevperf.sock 00:22:01.498 01:29:53 -- common/autotest_common.sh@819 -- # '[' -z 697830 ']' 00:22:01.498 01:29:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.498 01:29:53 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:01.498 01:29:53 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:01.498 01:29:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:01.498 01:29:53 -- nvmf/common.sh@520 -- # config=() 00:22:01.498 01:29:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:01.498 01:29:53 -- nvmf/common.sh@520 -- # local subsystem config 00:22:01.498 01:29:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:01.498 01:29:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:01.498 01:29:53 -- common/autotest_common.sh@10 -- # set +x 00:22:01.498 01:29:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:01.498 { 00:22:01.498 "params": { 00:22:01.498 "name": "Nvme$subsystem", 00:22:01.498 "trtype": "$TEST_TRANSPORT", 00:22:01.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.498 "adrfam": "ipv4", 00:22:01.498 "trsvcid": "$NVMF_PORT", 00:22:01.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.498 "hdgst": ${hdgst:-false}, 00:22:01.498 "ddgst": ${ddgst:-false} 00:22:01.498 }, 00:22:01.498 "method": "bdev_nvme_attach_controller" 00:22:01.498 } 00:22:01.498 EOF 00:22:01.498 )") 00:22:01.498 01:29:53 -- nvmf/common.sh@542 -- # cat 00:22:01.498 01:29:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:01.498 01:29:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:01.498 { 00:22:01.498 "params": { 00:22:01.498 "name": "Nvme$subsystem", 00:22:01.498 "trtype": "$TEST_TRANSPORT", 00:22:01.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.498 "adrfam": "ipv4", 00:22:01.498 "trsvcid": "$NVMF_PORT", 00:22:01.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.498 "hdgst": ${hdgst:-false}, 00:22:01.498 "ddgst": ${ddgst:-false} 00:22:01.498 }, 00:22:01.498 "method": "bdev_nvme_attach_controller" 00:22:01.498 } 00:22:01.498 EOF 00:22:01.498 )") 00:22:01.498 01:29:53 -- nvmf/common.sh@542 -- # cat 00:22:01.498 01:29:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:01.498 01:29:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:01.498 { 00:22:01.498 "params": { 00:22:01.498 "name": "Nvme$subsystem", 00:22:01.498 "trtype": "$TEST_TRANSPORT", 00:22:01.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.498 "adrfam": "ipv4", 00:22:01.498 "trsvcid": "$NVMF_PORT", 00:22:01.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.498 "hdgst": ${hdgst:-false}, 00:22:01.498 "ddgst": ${ddgst:-false} 00:22:01.498 }, 00:22:01.498 "method": "bdev_nvme_attach_controller" 00:22:01.498 } 00:22:01.498 EOF 00:22:01.498 )") 00:22:01.498 01:29:53 -- nvmf/common.sh@542 -- # cat 00:22:01.499 01:29:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:01.499 01:29:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:01.499 { 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme$subsystem", 00:22:01.499 "trtype": "$TEST_TRANSPORT", 00:22:01.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "$NVMF_PORT", 00:22:01.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.499 "hdgst": ${hdgst:-false}, 00:22:01.499 "ddgst": ${ddgst:-false} 00:22:01.499 }, 00:22:01.499 "method": "bdev_nvme_attach_controller" 00:22:01.499 } 00:22:01.499 EOF 00:22:01.499 )") 00:22:01.499 01:29:53 -- nvmf/common.sh@542 -- # cat 00:22:01.499 01:29:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:01.499 01:29:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:01.499 { 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme$subsystem", 00:22:01.499 "trtype": "$TEST_TRANSPORT", 00:22:01.499 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "$NVMF_PORT", 00:22:01.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.499 "hdgst": ${hdgst:-false}, 00:22:01.499 "ddgst": ${ddgst:-false} 00:22:01.499 }, 00:22:01.499 "method": "bdev_nvme_attach_controller" 00:22:01.499 } 00:22:01.499 EOF 00:22:01.499 )") 00:22:01.499 01:29:53 -- nvmf/common.sh@542 -- # cat 00:22:01.499 01:29:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:01.499 01:29:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:01.499 { 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme$subsystem", 00:22:01.499 "trtype": "$TEST_TRANSPORT", 00:22:01.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "$NVMF_PORT", 00:22:01.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.499 "hdgst": ${hdgst:-false}, 00:22:01.499 "ddgst": ${ddgst:-false} 00:22:01.499 }, 00:22:01.499 "method": "bdev_nvme_attach_controller" 00:22:01.499 } 00:22:01.499 EOF 00:22:01.499 )") 00:22:01.499 01:29:53 -- nvmf/common.sh@542 -- # cat 00:22:01.499 01:29:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:01.499 01:29:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:01.499 { 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme$subsystem", 00:22:01.499 "trtype": "$TEST_TRANSPORT", 00:22:01.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "$NVMF_PORT", 00:22:01.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.499 "hdgst": ${hdgst:-false}, 00:22:01.499 "ddgst": ${ddgst:-false} 00:22:01.499 }, 00:22:01.499 "method": "bdev_nvme_attach_controller" 00:22:01.499 } 00:22:01.499 EOF 00:22:01.499 )") 00:22:01.499 01:29:53 -- nvmf/common.sh@542 -- # cat 00:22:01.499 01:29:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:01.499 01:29:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:01.499 { 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme$subsystem", 00:22:01.499 "trtype": "$TEST_TRANSPORT", 00:22:01.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "$NVMF_PORT", 00:22:01.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.499 "hdgst": ${hdgst:-false}, 00:22:01.499 "ddgst": ${ddgst:-false} 00:22:01.499 }, 00:22:01.499 "method": "bdev_nvme_attach_controller" 00:22:01.499 } 00:22:01.499 EOF 00:22:01.499 )") 00:22:01.499 01:29:53 -- nvmf/common.sh@542 -- # cat 00:22:01.499 01:29:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:01.499 01:29:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:01.499 { 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme$subsystem", 00:22:01.499 "trtype": "$TEST_TRANSPORT", 00:22:01.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "$NVMF_PORT", 00:22:01.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.499 "hdgst": ${hdgst:-false}, 00:22:01.499 "ddgst": ${ddgst:-false} 00:22:01.499 }, 00:22:01.499 "method": "bdev_nvme_attach_controller" 00:22:01.499 } 00:22:01.499 EOF 00:22:01.499 )") 00:22:01.499 01:29:53 -- nvmf/common.sh@542 -- # cat 00:22:01.499 01:29:53 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:22:01.499 01:29:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:01.499 { 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme$subsystem", 00:22:01.499 "trtype": "$TEST_TRANSPORT", 00:22:01.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "$NVMF_PORT", 00:22:01.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.499 "hdgst": ${hdgst:-false}, 00:22:01.499 "ddgst": ${ddgst:-false} 00:22:01.499 }, 00:22:01.499 "method": "bdev_nvme_attach_controller" 00:22:01.499 } 00:22:01.499 EOF 00:22:01.499 )") 00:22:01.499 01:29:53 -- nvmf/common.sh@542 -- # cat 00:22:01.499 01:29:53 -- nvmf/common.sh@544 -- # jq . 00:22:01.499 01:29:53 -- nvmf/common.sh@545 -- # IFS=, 00:22:01.499 01:29:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme1", 00:22:01.499 "trtype": "tcp", 00:22:01.499 "traddr": "10.0.0.2", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "4420", 00:22:01.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.499 "hdgst": false, 00:22:01.499 "ddgst": false 00:22:01.499 }, 00:22:01.499 "method": "bdev_nvme_attach_controller" 00:22:01.499 },{ 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme2", 00:22:01.499 "trtype": "tcp", 00:22:01.499 "traddr": "10.0.0.2", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "4420", 00:22:01.499 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:01.499 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:01.499 "hdgst": false, 00:22:01.499 "ddgst": false 00:22:01.499 }, 00:22:01.499 "method": "bdev_nvme_attach_controller" 00:22:01.499 },{ 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme3", 00:22:01.499 "trtype": "tcp", 00:22:01.499 "traddr": "10.0.0.2", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "4420", 00:22:01.499 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:01.499 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:01.499 "hdgst": false, 00:22:01.499 "ddgst": false 00:22:01.499 }, 00:22:01.499 "method": "bdev_nvme_attach_controller" 00:22:01.499 },{ 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme4", 00:22:01.499 "trtype": "tcp", 00:22:01.499 "traddr": "10.0.0.2", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "4420", 00:22:01.499 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:01.499 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:01.499 "hdgst": false, 00:22:01.499 "ddgst": false 00:22:01.499 }, 00:22:01.499 "method": "bdev_nvme_attach_controller" 00:22:01.499 },{ 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme5", 00:22:01.499 "trtype": "tcp", 00:22:01.499 "traddr": "10.0.0.2", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "4420", 00:22:01.499 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:01.499 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:01.499 "hdgst": false, 00:22:01.499 "ddgst": false 00:22:01.499 }, 00:22:01.499 "method": "bdev_nvme_attach_controller" 00:22:01.499 },{ 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme6", 00:22:01.499 "trtype": "tcp", 00:22:01.499 "traddr": "10.0.0.2", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "4420", 00:22:01.499 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:01.499 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:01.499 "hdgst": false, 00:22:01.499 "ddgst": false 00:22:01.499 }, 00:22:01.499 "method": "bdev_nvme_attach_controller" 00:22:01.499 },{ 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme7", 00:22:01.499 "trtype": 
"tcp", 00:22:01.499 "traddr": "10.0.0.2", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "4420", 00:22:01.499 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:01.499 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:01.499 "hdgst": false, 00:22:01.499 "ddgst": false 00:22:01.499 }, 00:22:01.499 "method": "bdev_nvme_attach_controller" 00:22:01.499 },{ 00:22:01.499 "params": { 00:22:01.499 "name": "Nvme8", 00:22:01.499 "trtype": "tcp", 00:22:01.499 "traddr": "10.0.0.2", 00:22:01.499 "adrfam": "ipv4", 00:22:01.499 "trsvcid": "4420", 00:22:01.500 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:01.500 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:01.500 "hdgst": false, 00:22:01.500 "ddgst": false 00:22:01.500 }, 00:22:01.500 "method": "bdev_nvme_attach_controller" 00:22:01.500 },{ 00:22:01.500 "params": { 00:22:01.500 "name": "Nvme9", 00:22:01.500 "trtype": "tcp", 00:22:01.500 "traddr": "10.0.0.2", 00:22:01.500 "adrfam": "ipv4", 00:22:01.500 "trsvcid": "4420", 00:22:01.500 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:01.500 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:01.500 "hdgst": false, 00:22:01.500 "ddgst": false 00:22:01.500 }, 00:22:01.500 "method": "bdev_nvme_attach_controller" 00:22:01.500 },{ 00:22:01.500 "params": { 00:22:01.500 "name": "Nvme10", 00:22:01.500 "trtype": "tcp", 00:22:01.500 "traddr": "10.0.0.2", 00:22:01.500 "adrfam": "ipv4", 00:22:01.500 "trsvcid": "4420", 00:22:01.500 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:01.500 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:01.500 "hdgst": false, 00:22:01.500 "ddgst": false 00:22:01.500 }, 00:22:01.500 "method": "bdev_nvme_attach_controller" 00:22:01.500 }' 00:22:01.500 [2024-07-27 01:29:53.101655] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:01.500 [2024-07-27 01:29:53.101740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697830 ] 00:22:01.500 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.500 [2024-07-27 01:29:53.163841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.758 [2024-07-27 01:29:53.272851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.128 Running I/O for 10 seconds... 
00:22:04.063 01:29:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:04.063 01:29:55 -- common/autotest_common.sh@852 -- # return 0 00:22:04.063 01:29:55 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:04.063 01:29:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.063 01:29:55 -- common/autotest_common.sh@10 -- # set +x 00:22:04.063 01:29:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.063 01:29:55 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:04.063 01:29:55 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:04.063 01:29:55 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:04.063 01:29:55 -- target/shutdown.sh@57 -- # local ret=1 00:22:04.063 01:29:55 -- target/shutdown.sh@58 -- # local i 00:22:04.063 01:29:55 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:04.063 01:29:55 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:04.063 01:29:55 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:04.063 01:29:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.063 01:29:55 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:04.063 01:29:55 -- common/autotest_common.sh@10 -- # set +x 00:22:04.063 01:29:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.063 01:29:55 -- target/shutdown.sh@60 -- # read_io_count=257 00:22:04.063 01:29:55 -- target/shutdown.sh@63 -- # '[' 257 -ge 100 ']' 00:22:04.063 01:29:55 -- target/shutdown.sh@64 -- # ret=0 00:22:04.063 01:29:55 -- target/shutdown.sh@65 -- # break 00:22:04.063 01:29:55 -- target/shutdown.sh@69 -- # return 0 00:22:04.063 01:29:55 -- target/shutdown.sh@109 -- # killprocess 697830 00:22:04.063 01:29:55 -- common/autotest_common.sh@926 -- # '[' -z 697830 ']' 00:22:04.063 01:29:55 -- common/autotest_common.sh@930 -- # kill -0 697830 00:22:04.063 01:29:55 -- common/autotest_common.sh@931 -- # uname 00:22:04.063 01:29:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:04.063 01:29:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 697830 00:22:04.063 01:29:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:04.063 01:29:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:04.064 01:29:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 697830' 00:22:04.064 killing process with pid 697830 00:22:04.064 01:29:55 -- common/autotest_common.sh@945 -- # kill 697830 00:22:04.064 01:29:55 -- common/autotest_common.sh@950 -- # wait 697830 00:22:04.064 Received shutdown signal, test time was about 0.886351 seconds 00:22:04.064 00:22:04.064 Latency(us) 00:22:04.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.064 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.064 Verification LBA range: start 0x0 length 0x400 00:22:04.064 Nvme1n1 : 0.84 426.39 26.65 0.00 0.00 146461.91 22719.15 128159.29 00:22:04.064 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.064 Verification LBA range: start 0x0 length 0x400 00:22:04.064 Nvme2n1 : 0.85 370.72 23.17 0.00 0.00 167760.45 15534.46 163111.82 00:22:04.064 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.064 Verification LBA range: start 0x0 length 0x400 00:22:04.064 Nvme3n1 : 0.83 426.24 26.64 0.00 0.00 143561.10 24855.13 114955.00 00:22:04.064 Job: Nvme4n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:22:04.064 Verification LBA range: start 0x0 length 0x400 00:22:04.064 Nvme4n1 : 0.82 386.05 24.13 0.00 0.00 157641.47 19612.25 119615.34 00:22:04.064 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.064 Verification LBA range: start 0x0 length 0x400 00:22:04.064 Nvme5n1 : 0.84 423.63 26.48 0.00 0.00 141564.60 26408.58 135926.52 00:22:04.064 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.064 Verification LBA range: start 0x0 length 0x400 00:22:04.064 Nvme6n1 : 0.85 371.71 23.23 0.00 0.00 160641.72 22622.06 142917.03 00:22:04.064 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.064 Verification LBA range: start 0x0 length 0x400 00:22:04.064 Nvme7n1 : 0.89 400.90 25.06 0.00 0.00 141689.45 18447.17 119615.34 00:22:04.064 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.064 Verification LBA range: start 0x0 length 0x400 00:22:04.064 Nvme8n1 : 0.83 393.36 24.58 0.00 0.00 148044.07 5267.15 121945.51 00:22:04.064 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.064 Verification LBA range: start 0x0 length 0x400 00:22:04.064 Nvme9n1 : 0.84 424.66 26.54 0.00 0.00 135936.64 26214.40 110294.66 00:22:04.064 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.064 Verification LBA range: start 0x0 length 0x400 00:22:04.064 Nvme10n1 : 0.85 367.77 22.99 0.00 0.00 158168.18 8446.86 173209.22 00:22:04.064 =================================================================================================================== 00:22:04.064 Total : 3991.43 249.46 0.00 0.00 149640.08 5267.15 173209.22 00:22:04.321 01:29:56 -- target/shutdown.sh@112 -- # sleep 1 00:22:05.694 01:29:57 -- target/shutdown.sh@113 -- # kill -0 697636 00:22:05.694 01:29:57 -- target/shutdown.sh@115 -- # stoptarget 00:22:05.694 01:29:57 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:05.694 01:29:57 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:05.694 01:29:57 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:05.694 01:29:57 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:05.694 01:29:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:05.694 01:29:57 -- nvmf/common.sh@116 -- # sync 00:22:05.694 01:29:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:05.694 01:29:57 -- nvmf/common.sh@119 -- # set +e 00:22:05.694 01:29:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:05.694 01:29:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:05.694 rmmod nvme_tcp 00:22:05.694 rmmod nvme_fabrics 00:22:05.694 rmmod nvme_keyring 00:22:05.694 01:29:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:05.694 01:29:57 -- nvmf/common.sh@123 -- # set -e 00:22:05.694 01:29:57 -- nvmf/common.sh@124 -- # return 0 00:22:05.694 01:29:57 -- nvmf/common.sh@477 -- # '[' -n 697636 ']' 00:22:05.694 01:29:57 -- nvmf/common.sh@478 -- # killprocess 697636 00:22:05.694 01:29:57 -- common/autotest_common.sh@926 -- # '[' -z 697636 ']' 00:22:05.694 01:29:57 -- common/autotest_common.sh@930 -- # kill -0 697636 00:22:05.694 01:29:57 -- common/autotest_common.sh@931 -- # uname 00:22:05.694 01:29:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:05.694 01:29:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 697636 
00:22:05.694 01:29:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:05.694 01:29:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:05.694 01:29:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 697636' 00:22:05.694 killing process with pid 697636 00:22:05.694 01:29:57 -- common/autotest_common.sh@945 -- # kill 697636 00:22:05.694 01:29:57 -- common/autotest_common.sh@950 -- # wait 697636 00:22:05.953 01:29:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:05.953 01:29:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:05.953 01:29:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:05.953 01:29:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:05.953 01:29:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:05.953 01:29:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.953 01:29:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.953 01:29:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.484 01:29:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:08.484 00:22:08.484 real 0m8.384s 00:22:08.484 user 0m26.448s 00:22:08.484 sys 0m1.572s 00:22:08.484 01:29:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.484 01:29:59 -- common/autotest_common.sh@10 -- # set +x 00:22:08.484 ************************************ 00:22:08.484 END TEST nvmf_shutdown_tc2 00:22:08.484 ************************************ 00:22:08.484 01:29:59 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:08.484 01:29:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:08.484 01:29:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:08.484 01:29:59 -- common/autotest_common.sh@10 -- # set +x 00:22:08.484 ************************************ 00:22:08.484 START TEST nvmf_shutdown_tc3 00:22:08.484 ************************************ 00:22:08.484 01:29:59 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:22:08.484 01:29:59 -- target/shutdown.sh@120 -- # starttarget 00:22:08.484 01:29:59 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:08.484 01:29:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:08.484 01:29:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.484 01:29:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:08.484 01:29:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:08.484 01:29:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:08.484 01:29:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.484 01:29:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.484 01:29:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.484 01:29:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:08.484 01:29:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:08.484 01:29:59 -- common/autotest_common.sh@10 -- # set +x 00:22:08.484 01:29:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:08.484 01:29:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:08.484 01:29:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:08.484 01:29:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:08.484 01:29:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:08.484 01:29:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:08.484 01:29:59 -- nvmf/common.sh@292 -- # local -A 
pci_drivers 00:22:08.484 01:29:59 -- nvmf/common.sh@294 -- # net_devs=() 00:22:08.484 01:29:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:08.484 01:29:59 -- nvmf/common.sh@295 -- # e810=() 00:22:08.484 01:29:59 -- nvmf/common.sh@295 -- # local -ga e810 00:22:08.484 01:29:59 -- nvmf/common.sh@296 -- # x722=() 00:22:08.484 01:29:59 -- nvmf/common.sh@296 -- # local -ga x722 00:22:08.484 01:29:59 -- nvmf/common.sh@297 -- # mlx=() 00:22:08.484 01:29:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:08.484 01:29:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.484 01:29:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.484 01:29:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.484 01:29:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.484 01:29:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.484 01:29:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.484 01:29:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.484 01:29:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.484 01:29:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.484 01:29:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.484 01:29:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.484 01:29:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:08.484 01:29:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:08.484 01:29:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:08.484 01:29:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:08.484 01:29:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:08.484 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:08.484 01:29:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:08.484 01:29:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:08.484 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:08.484 01:29:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:08.484 01:29:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:08.484 01:29:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:08.484 01:29:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.485 01:29:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:08.485 01:29:59 -- 
nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.485 01:29:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:08.485 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:08.485 01:29:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.485 01:29:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:08.485 01:29:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.485 01:29:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:08.485 01:29:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.485 01:29:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:08.485 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:08.485 01:29:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.485 01:29:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:08.485 01:29:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:08.485 01:29:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:08.485 01:29:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:08.485 01:29:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:08.485 01:29:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.485 01:29:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.485 01:29:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.485 01:29:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:08.485 01:29:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.485 01:29:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.485 01:29:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:08.485 01:29:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.485 01:29:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.485 01:29:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:08.485 01:29:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:08.485 01:29:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.485 01:29:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.485 01:29:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.485 01:29:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.485 01:29:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:08.485 01:29:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.485 01:29:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.485 01:29:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.485 01:29:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:08.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:22:08.485 00:22:08.485 --- 10.0.0.2 ping statistics --- 00:22:08.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.485 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:22:08.485 01:29:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:08.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:22:08.485 00:22:08.485 --- 10.0.0.1 ping statistics --- 00:22:08.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.485 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:22:08.485 01:29:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.485 01:29:59 -- nvmf/common.sh@410 -- # return 0 00:22:08.485 01:29:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:08.485 01:29:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.485 01:29:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:08.485 01:29:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:08.485 01:29:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.485 01:29:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:08.485 01:29:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:08.485 01:29:59 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:08.485 01:29:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:08.485 01:29:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:08.485 01:29:59 -- common/autotest_common.sh@10 -- # set +x 00:22:08.485 01:29:59 -- nvmf/common.sh@469 -- # nvmfpid=698767 00:22:08.485 01:29:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:08.485 01:29:59 -- nvmf/common.sh@470 -- # waitforlisten 698767 00:22:08.485 01:29:59 -- common/autotest_common.sh@819 -- # '[' -z 698767 ']' 00:22:08.485 01:29:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.485 01:29:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:08.485 01:29:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.485 01:29:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:08.485 01:29:59 -- common/autotest_common.sh@10 -- # set +x 00:22:08.485 [2024-07-27 01:29:59.919864] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:08.485 [2024-07-27 01:29:59.919951] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.485 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.485 [2024-07-27 01:29:59.989474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:08.485 [2024-07-27 01:30:00.111845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:08.485 [2024-07-27 01:30:00.112021] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.485 [2024-07-27 01:30:00.112038] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.485 [2024-07-27 01:30:00.112086] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:08.485 [2024-07-27 01:30:00.112182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.485 [2024-07-27 01:30:00.112301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.485 [2024-07-27 01:30:00.112377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:08.485 [2024-07-27 01:30:00.112381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.418 01:30:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:09.418 01:30:00 -- common/autotest_common.sh@852 -- # return 0 00:22:09.418 01:30:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:09.418 01:30:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:09.418 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:22:09.418 01:30:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.418 01:30:00 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.418 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.418 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:22:09.418 [2024-07-27 01:30:00.895516] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.418 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.418 01:30:00 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:09.418 01:30:00 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:09.418 01:30:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:09.418 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:22:09.418 01:30:00 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:09.418 01:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.418 01:30:00 -- target/shutdown.sh@28 -- # cat 00:22:09.418 01:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.418 01:30:00 -- target/shutdown.sh@28 -- # cat 00:22:09.418 01:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.418 01:30:00 -- target/shutdown.sh@28 -- # cat 00:22:09.418 01:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.418 01:30:00 -- target/shutdown.sh@28 -- # cat 00:22:09.418 01:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.418 01:30:00 -- target/shutdown.sh@28 -- # cat 00:22:09.418 01:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.418 01:30:00 -- target/shutdown.sh@28 -- # cat 00:22:09.418 01:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.418 01:30:00 -- target/shutdown.sh@28 -- # cat 00:22:09.418 01:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.418 01:30:00 -- target/shutdown.sh@28 -- # cat 00:22:09.418 01:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.418 01:30:00 -- target/shutdown.sh@28 -- # cat 00:22:09.418 01:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.418 01:30:00 -- target/shutdown.sh@28 -- # cat 00:22:09.418 01:30:00 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:09.418 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.418 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:22:09.418 Malloc1 00:22:09.418 [2024-07-27 01:30:00.970792] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.418 Malloc2 
00:22:09.418 Malloc3 00:22:09.418 Malloc4 00:22:09.418 Malloc5 00:22:09.677 Malloc6 00:22:09.677 Malloc7 00:22:09.677 Malloc8 00:22:09.677 Malloc9 00:22:09.677 Malloc10 00:22:09.677 01:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.677 01:30:01 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:09.677 01:30:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:09.677 01:30:01 -- common/autotest_common.sh@10 -- # set +x 00:22:09.677 01:30:01 -- target/shutdown.sh@124 -- # perfpid=699089 00:22:09.677 01:30:01 -- target/shutdown.sh@125 -- # waitforlisten 699089 /var/tmp/bdevperf.sock 00:22:09.677 01:30:01 -- common/autotest_common.sh@819 -- # '[' -z 699089 ']' 00:22:09.677 01:30:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.677 01:30:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:09.677 01:30:01 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:09.677 01:30:01 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:09.677 01:30:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:09.677 01:30:01 -- nvmf/common.sh@520 -- # config=() 00:22:09.677 01:30:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:09.677 01:30:01 -- nvmf/common.sh@520 -- # local subsystem config 00:22:09.677 01:30:01 -- common/autotest_common.sh@10 -- # set +x 00:22:09.677 01:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:09.677 01:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:09.677 { 00:22:09.677 "params": { 00:22:09.677 "name": "Nvme$subsystem", 00:22:09.677 "trtype": "$TEST_TRANSPORT", 00:22:09.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.677 "adrfam": "ipv4", 00:22:09.677 "trsvcid": "$NVMF_PORT", 00:22:09.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.677 "hdgst": ${hdgst:-false}, 00:22:09.677 "ddgst": ${ddgst:-false} 00:22:09.677 }, 00:22:09.677 "method": "bdev_nvme_attach_controller" 00:22:09.677 } 00:22:09.677 EOF 00:22:09.677 )") 00:22:09.677 01:30:01 -- nvmf/common.sh@542 -- # cat 00:22:09.677 01:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:09.677 01:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:09.677 { 00:22:09.677 "params": { 00:22:09.677 "name": "Nvme$subsystem", 00:22:09.677 "trtype": "$TEST_TRANSPORT", 00:22:09.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.677 "adrfam": "ipv4", 00:22:09.677 "trsvcid": "$NVMF_PORT", 00:22:09.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.677 "hdgst": ${hdgst:-false}, 00:22:09.677 "ddgst": ${ddgst:-false} 00:22:09.677 }, 00:22:09.677 "method": "bdev_nvme_attach_controller" 00:22:09.677 } 00:22:09.677 EOF 00:22:09.677 )") 00:22:09.677 01:30:01 -- nvmf/common.sh@542 -- # cat 00:22:09.677 01:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:09.677 01:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:09.677 { 00:22:09.677 "params": { 00:22:09.677 "name": "Nvme$subsystem", 00:22:09.677 "trtype": "$TEST_TRANSPORT", 00:22:09.677 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:22:09.677 "adrfam": "ipv4", 00:22:09.677 "trsvcid": "$NVMF_PORT", 00:22:09.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.677 "hdgst": ${hdgst:-false}, 00:22:09.677 "ddgst": ${ddgst:-false} 00:22:09.677 }, 00:22:09.677 "method": "bdev_nvme_attach_controller" 00:22:09.677 } 00:22:09.677 EOF 00:22:09.677 )") 00:22:09.935 01:30:01 -- nvmf/common.sh@542 -- # cat 00:22:09.936 01:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:09.936 01:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:09.936 { 00:22:09.936 "params": { 00:22:09.936 "name": "Nvme$subsystem", 00:22:09.936 "trtype": "$TEST_TRANSPORT", 00:22:09.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.936 "adrfam": "ipv4", 00:22:09.936 "trsvcid": "$NVMF_PORT", 00:22:09.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.936 "hdgst": ${hdgst:-false}, 00:22:09.936 "ddgst": ${ddgst:-false} 00:22:09.936 }, 00:22:09.936 "method": "bdev_nvme_attach_controller" 00:22:09.936 } 00:22:09.936 EOF 00:22:09.936 )") 00:22:09.936 01:30:01 -- nvmf/common.sh@542 -- # cat 00:22:09.936 01:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:09.936 01:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:09.936 { 00:22:09.936 "params": { 00:22:09.936 "name": "Nvme$subsystem", 00:22:09.936 "trtype": "$TEST_TRANSPORT", 00:22:09.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.936 "adrfam": "ipv4", 00:22:09.936 "trsvcid": "$NVMF_PORT", 00:22:09.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.936 "hdgst": ${hdgst:-false}, 00:22:09.936 "ddgst": ${ddgst:-false} 00:22:09.936 }, 00:22:09.936 "method": "bdev_nvme_attach_controller" 00:22:09.936 } 00:22:09.936 EOF 00:22:09.936 )") 00:22:09.936 01:30:01 -- nvmf/common.sh@542 -- # cat 00:22:09.936 01:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:09.936 01:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:09.936 { 00:22:09.936 "params": { 00:22:09.936 "name": "Nvme$subsystem", 00:22:09.936 "trtype": "$TEST_TRANSPORT", 00:22:09.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.936 "adrfam": "ipv4", 00:22:09.936 "trsvcid": "$NVMF_PORT", 00:22:09.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.936 "hdgst": ${hdgst:-false}, 00:22:09.936 "ddgst": ${ddgst:-false} 00:22:09.936 }, 00:22:09.936 "method": "bdev_nvme_attach_controller" 00:22:09.936 } 00:22:09.936 EOF 00:22:09.936 )") 00:22:09.936 01:30:01 -- nvmf/common.sh@542 -- # cat 00:22:09.936 01:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:09.936 01:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:09.936 { 00:22:09.936 "params": { 00:22:09.936 "name": "Nvme$subsystem", 00:22:09.936 "trtype": "$TEST_TRANSPORT", 00:22:09.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.936 "adrfam": "ipv4", 00:22:09.936 "trsvcid": "$NVMF_PORT", 00:22:09.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.936 "hdgst": ${hdgst:-false}, 00:22:09.936 "ddgst": ${ddgst:-false} 00:22:09.936 }, 00:22:09.936 "method": "bdev_nvme_attach_controller" 00:22:09.936 } 00:22:09.936 EOF 00:22:09.936 )") 00:22:09.936 01:30:01 -- nvmf/common.sh@542 -- # cat 00:22:09.936 01:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:22:09.936 01:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:09.936 { 00:22:09.936 "params": { 00:22:09.936 "name": "Nvme$subsystem", 00:22:09.936 "trtype": "$TEST_TRANSPORT", 00:22:09.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.936 "adrfam": "ipv4", 00:22:09.936 "trsvcid": "$NVMF_PORT", 00:22:09.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.936 "hdgst": ${hdgst:-false}, 00:22:09.936 "ddgst": ${ddgst:-false} 00:22:09.936 }, 00:22:09.936 "method": "bdev_nvme_attach_controller" 00:22:09.936 } 00:22:09.936 EOF 00:22:09.936 )") 00:22:09.936 01:30:01 -- nvmf/common.sh@542 -- # cat 00:22:09.936 01:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:09.936 01:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:09.936 { 00:22:09.936 "params": { 00:22:09.936 "name": "Nvme$subsystem", 00:22:09.936 "trtype": "$TEST_TRANSPORT", 00:22:09.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.936 "adrfam": "ipv4", 00:22:09.936 "trsvcid": "$NVMF_PORT", 00:22:09.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.936 "hdgst": ${hdgst:-false}, 00:22:09.936 "ddgst": ${ddgst:-false} 00:22:09.936 }, 00:22:09.936 "method": "bdev_nvme_attach_controller" 00:22:09.936 } 00:22:09.936 EOF 00:22:09.936 )") 00:22:09.936 01:30:01 -- nvmf/common.sh@542 -- # cat 00:22:09.936 01:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:09.936 01:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:09.936 { 00:22:09.936 "params": { 00:22:09.936 "name": "Nvme$subsystem", 00:22:09.936 "trtype": "$TEST_TRANSPORT", 00:22:09.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.936 "adrfam": "ipv4", 00:22:09.936 "trsvcid": "$NVMF_PORT", 00:22:09.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.936 "hdgst": ${hdgst:-false}, 00:22:09.936 "ddgst": ${ddgst:-false} 00:22:09.936 }, 00:22:09.936 "method": "bdev_nvme_attach_controller" 00:22:09.936 } 00:22:09.936 EOF 00:22:09.936 )") 00:22:09.936 01:30:01 -- nvmf/common.sh@542 -- # cat 00:22:09.936 01:30:01 -- nvmf/common.sh@544 -- # jq . 
00:22:09.936 01:30:01 -- nvmf/common.sh@545 -- # IFS=, 00:22:09.936 01:30:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:09.936 "params": { 00:22:09.936 "name": "Nvme1", 00:22:09.936 "trtype": "tcp", 00:22:09.936 "traddr": "10.0.0.2", 00:22:09.936 "adrfam": "ipv4", 00:22:09.936 "trsvcid": "4420", 00:22:09.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.936 "hdgst": false, 00:22:09.936 "ddgst": false 00:22:09.936 }, 00:22:09.936 "method": "bdev_nvme_attach_controller" 00:22:09.936 },{ 00:22:09.936 "params": { 00:22:09.936 "name": "Nvme2", 00:22:09.936 "trtype": "tcp", 00:22:09.936 "traddr": "10.0.0.2", 00:22:09.936 "adrfam": "ipv4", 00:22:09.936 "trsvcid": "4420", 00:22:09.936 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:09.936 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:09.936 "hdgst": false, 00:22:09.936 "ddgst": false 00:22:09.936 }, 00:22:09.936 "method": "bdev_nvme_attach_controller" 00:22:09.936 },{ 00:22:09.936 "params": { 00:22:09.936 "name": "Nvme3", 00:22:09.936 "trtype": "tcp", 00:22:09.936 "traddr": "10.0.0.2", 00:22:09.936 "adrfam": "ipv4", 00:22:09.936 "trsvcid": "4420", 00:22:09.936 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:09.936 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:09.936 "hdgst": false, 00:22:09.936 "ddgst": false 00:22:09.936 }, 00:22:09.936 "method": "bdev_nvme_attach_controller" 00:22:09.936 },{ 00:22:09.936 "params": { 00:22:09.936 "name": "Nvme4", 00:22:09.936 "trtype": "tcp", 00:22:09.936 "traddr": "10.0.0.2", 00:22:09.936 "adrfam": "ipv4", 00:22:09.936 "trsvcid": "4420", 00:22:09.936 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:09.936 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:09.936 "hdgst": false, 00:22:09.936 "ddgst": false 00:22:09.936 }, 00:22:09.936 "method": "bdev_nvme_attach_controller" 00:22:09.936 },{ 00:22:09.936 "params": { 00:22:09.936 "name": "Nvme5", 00:22:09.936 "trtype": "tcp", 00:22:09.936 "traddr": "10.0.0.2", 00:22:09.936 "adrfam": "ipv4", 00:22:09.936 "trsvcid": "4420", 00:22:09.936 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:09.936 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:09.936 "hdgst": false, 00:22:09.936 "ddgst": false 00:22:09.936 }, 00:22:09.936 "method": "bdev_nvme_attach_controller" 00:22:09.936 },{ 00:22:09.936 "params": { 00:22:09.936 "name": "Nvme6", 00:22:09.936 "trtype": "tcp", 00:22:09.936 "traddr": "10.0.0.2", 00:22:09.936 "adrfam": "ipv4", 00:22:09.936 "trsvcid": "4420", 00:22:09.936 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:09.936 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:09.936 "hdgst": false, 00:22:09.937 "ddgst": false 00:22:09.937 }, 00:22:09.937 "method": "bdev_nvme_attach_controller" 00:22:09.937 },{ 00:22:09.937 "params": { 00:22:09.937 "name": "Nvme7", 00:22:09.937 "trtype": "tcp", 00:22:09.937 "traddr": "10.0.0.2", 00:22:09.937 "adrfam": "ipv4", 00:22:09.937 "trsvcid": "4420", 00:22:09.937 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:09.937 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:09.937 "hdgst": false, 00:22:09.937 "ddgst": false 00:22:09.937 }, 00:22:09.937 "method": "bdev_nvme_attach_controller" 00:22:09.937 },{ 00:22:09.937 "params": { 00:22:09.937 "name": "Nvme8", 00:22:09.937 "trtype": "tcp", 00:22:09.937 "traddr": "10.0.0.2", 00:22:09.937 "adrfam": "ipv4", 00:22:09.937 "trsvcid": "4420", 00:22:09.937 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:09.937 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:09.937 "hdgst": false, 00:22:09.937 "ddgst": false 00:22:09.937 }, 00:22:09.937 "method": 
"bdev_nvme_attach_controller" 00:22:09.937 },{ 00:22:09.937 "params": { 00:22:09.937 "name": "Nvme9", 00:22:09.937 "trtype": "tcp", 00:22:09.937 "traddr": "10.0.0.2", 00:22:09.937 "adrfam": "ipv4", 00:22:09.937 "trsvcid": "4420", 00:22:09.937 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:09.937 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:09.937 "hdgst": false, 00:22:09.937 "ddgst": false 00:22:09.937 }, 00:22:09.937 "method": "bdev_nvme_attach_controller" 00:22:09.937 },{ 00:22:09.937 "params": { 00:22:09.937 "name": "Nvme10", 00:22:09.937 "trtype": "tcp", 00:22:09.937 "traddr": "10.0.0.2", 00:22:09.937 "adrfam": "ipv4", 00:22:09.937 "trsvcid": "4420", 00:22:09.937 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:09.937 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:09.937 "hdgst": false, 00:22:09.937 "ddgst": false 00:22:09.937 }, 00:22:09.937 "method": "bdev_nvme_attach_controller" 00:22:09.937 }' 00:22:09.937 [2024-07-27 01:30:01.461483] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:09.937 [2024-07-27 01:30:01.461571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid699089 ] 00:22:09.937 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.937 [2024-07-27 01:30:01.528780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.937 [2024-07-27 01:30:01.638107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.833 Running I/O for 10 seconds... 00:22:12.419 01:30:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:12.419 01:30:03 -- common/autotest_common.sh@852 -- # return 0 00:22:12.419 01:30:03 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:12.419 01:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.419 01:30:03 -- common/autotest_common.sh@10 -- # set +x 00:22:12.419 01:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.419 01:30:03 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:12.419 01:30:03 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:12.419 01:30:03 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:12.419 01:30:03 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:12.419 01:30:03 -- target/shutdown.sh@57 -- # local ret=1 00:22:12.419 01:30:03 -- target/shutdown.sh@58 -- # local i 00:22:12.419 01:30:03 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:12.419 01:30:03 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:12.419 01:30:03 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:12.419 01:30:03 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:12.419 01:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.419 01:30:03 -- common/autotest_common.sh@10 -- # set +x 00:22:12.419 01:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.419 01:30:03 -- target/shutdown.sh@60 -- # read_io_count=211 00:22:12.419 01:30:03 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']' 00:22:12.419 01:30:03 -- target/shutdown.sh@64 -- # ret=0 00:22:12.419 01:30:03 -- target/shutdown.sh@65 -- # break 00:22:12.419 01:30:03 -- target/shutdown.sh@69 -- # return 0 00:22:12.419 01:30:03 -- target/shutdown.sh@134 -- # killprocess 698767 
00:22:12.419 01:30:03 -- common/autotest_common.sh@926 -- # '[' -z 698767 ']' 00:22:12.419 01:30:03 -- common/autotest_common.sh@930 -- # kill -0 698767 00:22:12.419 01:30:03 -- common/autotest_common.sh@931 -- # uname 00:22:12.419 01:30:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:12.419 01:30:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 698767 00:22:12.419 01:30:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:12.419 01:30:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:12.419 01:30:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 698767' 00:22:12.419 killing process with pid 698767 00:22:12.419 01:30:03 -- common/autotest_common.sh@945 -- # kill 698767 00:22:12.419 01:30:03 -- common/autotest_common.sh@950 -- # wait 698767 00:22:12.419 [2024-07-27 01:30:03.963957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964157] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set 00:22:12.419 [2024-07-27 01:30:03.964311] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0100 is same with the state(5) to be set
[the identical tcp.c:1574:nvmf_tcp_qpair_set_recv_state *ERROR* line repeats many times between 01:30:03.964 and 01:30:03.968, first for tqpair=0x24d0100 and then for tqpair=0x24d0590, as the target tears down its queue pairs during shutdown]
00:22:12.420 [2024-07-27 01:30:03.968741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same
with the state(5) to be set 00:22:12.420 [2024-07-27 01:30:03.968753] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.420 [2024-07-27 01:30:03.968766] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.420 [2024-07-27 01:30:03.968778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.420 [2024-07-27 01:30:03.968790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.420 [2024-07-27 01:30:03.968814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.968826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.968838] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.968850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.968862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.968874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.968886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.968898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.968910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.968922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.968934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.968951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.968963] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.968978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.968990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.969002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.969015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.969027] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.969040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0590 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.970894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.970926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.970941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.970954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.970967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.970980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.970993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the 
state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971216] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971561] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.421 [2024-07-27 01:30:03.971657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.971669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0a40 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.972942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.422 [2024-07-27 01:30:03.972985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-07-27 01:30:03.973003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.422 [2024-07-27 01:30:03.973018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-07-27 01:30:03.973032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:12.422 [2024-07-27 01:30:03.973045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-07-27 01:30:03.973067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.422 [2024-07-27 01:30:03.973083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-07-27 01:30:03.973097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2309f70 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.422 [2024-07-27 01:30:03.973196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-07-27 01:30:03.973189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.422 [2024-07-27 01:30:03.973221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-07-27 01:30:03.973236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.422 [2024-07-27 01:30:03.973249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-07-27 01:30:03.973263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-07-27 01:30:03.973276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with id:0 cdw10:00000000 cdw11:00000000 00:22:12.422 the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-27 01:30:03.973291] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23424f0 is same [2024-07-27 01:30:03.973306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with with the state(5) to be set 
00:22:12.422 the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-07-27 01:30:03.973371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with id:0 cdw10:00000000 cdw11:00000000 00:22:12.422 the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-07-27 01:30:03.973399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.422 [2024-07-27 01:30:03.973412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-07-27 01:30:03.973426] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.422 [2024-07-27 01:30:03.973440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-07-27 01:30:03.973453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.422 [2024-07-27 01:30:03.973466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-27 01:30:03.973480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be 
set 00:22:12.422 [2024-07-27 01:30:03.973496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cda60 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-07-27 01:30:03.973547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with id:0 cdw10:00000000 cdw11:00000000 00:22:12.422 the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with [2024-07-27 01:30:03.973563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:22:12.422 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-07-27 01:30:03.973577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.422 [2024-07-27 01:30:03.973590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-07-27 01:30:03.973603] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.422 [2024-07-27 01:30:03.973616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-07-27 01:30:03.973629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.422 [2024-07-27 01:30:03.973644] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-07-27 01:30:03.973657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 
01:30:03.973667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226aa50 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.422 [2024-07-27 01:30:03.973699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.423 [2024-07-27 01:30:03.973736] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.423 [2024-07-27 01:30:03.973775] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.423 [2024-07-27 01:30:03.973804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.423 [2024-07-27 01:30:03.973830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973844] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.423 [2024-07-27 01:30:03.973857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.423 [2024-07-27 01:30:03.973882] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.423 [2024-07-27 01:30:03.973895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.423 [2024-07-27 01:30:03.973907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2307210 is same [2024-07-27 01:30:03.973920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with with the state(5) to be set 00:22:12.423 the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973939] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.973990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.974002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d0ed0 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.974984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.423 [2024-07-27 01:30:03.975002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with [2024-07-27 01:30:03.975011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:12.423 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.423 [2024-07-27 01:30:03.975032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128[2024-07-27 01:30:03.975047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.423 the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with [2024-07-27 01:30:03.975073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:12.423 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.423 [2024-07-27 01:30:03.975090] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with [2024-07-27 01:30:03.975106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:12the state(5) to be set 00:22:12.423 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.423 [2024-07-27 01:30:03.975121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.423 [2024-07-27 01:30:03.975134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.423 [2024-07-27 01:30:03.975148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.423 [2024-07-27 01:30:03.975161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29696 len:1[2024-07-27 01:30:03.975174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.423 the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.423 [2024-07-27 01:30:03.975208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.423 [2024-07-27 01:30:03.975221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.423 [2024-07-27 01:30:03.975234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29952 len:1[2024-07-27 01:30:03.975247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.423 the state(5) to be set 00:22:12.423 [2024-07-27 
01:30:03.975262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-27 01:30:03.975262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.423 the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.423 [2024-07-27 01:30:03.975291] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.423 [2024-07-27 01:30:03.975304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.423 [2024-07-27 01:30:03.975317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.423 [2024-07-27 01:30:03.975330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.423 [2024-07-27 01:30:03.975356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.423 [2024-07-27 01:30:03.975365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-27 01:30:03.975369] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30464 len:12[2024-07-27 01:30:03.975382] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.424 the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-27 01:30:03.975402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 
01:30:03.975424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.424 [2024-07-27 01:30:03.975434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-27 01:30:03.975447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.424 [2024-07-27 01:30:03.975460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-27 01:30:03.975474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24576 len:12[2024-07-27 01:30:03.975487] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.424 the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-27 01:30:03.975503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.424 [2024-07-27 01:30:03.975530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-27 01:30:03.975543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.424 [2024-07-27 01:30:03.975556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-27 01:30:03.975570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 
[2024-07-27 01:30:03.975583] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with [2024-07-27 01:30:03.975582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25600 len:128the state(5) to be set 00:22:12.424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.424 [2024-07-27 01:30:03.975600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-27 01:30:03.975613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.424 [2024-07-27 01:30:03.975626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-27 01:30:03.975639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with [2024-07-27 01:30:03.975651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26496 len:12the state(5) to be set 00:22:12.424 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.424 [2024-07-27 01:30:03.975667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-27 01:30:03.975680] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.424 [2024-07-27 01:30:03.975693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-27 01:30:03.975705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975718] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with [2024-07-27 01:30:03.975718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26880 len:12the state(5) to be set 00:22:12.424 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.424 [2024-07-27 01:30:03.975733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 
00:22:12.424 [2024-07-27 01:30:03.975735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-27 01:30:03.975746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.424 [2024-07-27 01:30:03.975759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-27 01:30:03.975772] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30720 len:12[2024-07-27 01:30:03.975785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.424 the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with [2024-07-27 01:30:03.975802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:12.424 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.424 [2024-07-27 01:30:03.975816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.424 [2024-07-27 01:30:03.975820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.975829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.425 [2024-07-27 01:30:03.975835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.975843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.425 [2024-07-27 01:30:03.975853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30976 len:128[2024-07-27 01:30:03.975855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 the state(5) to be set 00:22:12.425 [2024-07-27 01:30:03.975869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with [2024-07-27 01:30:03.975870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:12.425 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.975884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.425 [2024-07-27 01:30:03.975888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.975897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1360 is same with the state(5) to be set 00:22:12.425 [2024-07-27 01:30:03.975902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.975919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.975933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.975950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.975964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.975980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.975995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 
01:30:03.976214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976535] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-07-27 01:30:03.976881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-07-27 01:30:03.976896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-07-27 01:30:03.976911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-07-27 01:30:03.976925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-07-27 01:30:03.976941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-07-27 01:30:03.976954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-07-27 01:30:03.976969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-07-27 01:30:03.976983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-07-27 01:30:03.976998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-07-27 01:30:03.977019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-07-27 01:30:03.977034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-07-27 01:30:03.977084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-07-27 01:30:03.977098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-07-27 01:30:03.977122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-07-27 01:30:03.977125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-07-27 01:30:03.977149] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-07-27 01:30:03.977162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x34f9a70 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977175] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977435] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977603] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977628] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977664] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977688] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the 
state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977714] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x34f9a70 was disconnected and freed. reset controller. 00:22:12.426 [2024-07-27 01:30:03.977725] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977772] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.426 [2024-07-27 01:30:03.977899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.977911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.977926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.977939] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1810 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979643] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979682] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979695] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979709] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:12.427 [2024-07-27 01:30:03.979722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979736] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23424f0 (9): Bad file descriptor 00:22:12.427 [2024-07-27 01:30:03.979762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979776] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979924] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.979997] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the 
state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980175] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980285] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.980297] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d1ca0 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.981720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.981750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.981765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.981778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.981791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.981804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.981817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.981830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.427 [2024-07-27 01:30:03.981842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.981854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.981867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.981879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.981892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.981905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.981907] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.428 [2024-07-27 01:30:03.981917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.981939] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.981952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.981964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.981977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.981989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.981988] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.428 [2024-07-27 01:30:03.982002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982016] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982104] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982334] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982363] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d2130 is same with the state(5) to be set 00:22:12.428 [2024-07-27 01:30:03.982623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.428 [2024-07-27 01:30:03.982668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.428 [2024-07-27 01:30:03.982704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.428 [2024-07-27 01:30:03.982729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.428 [2024-07-27 01:30:03.982749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.428 [2024-07-27 01:30:03.982765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.428 [2024-07-27 01:30:03.982783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.428 [2024-07-27 01:30:03.982798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.428 [2024-07-27 01:30:03.982813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.428 [2024-07-27 01:30:03.982832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.428 [2024-07-27 01:30:03.982856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.428 [2024-07-27 01:30:03.982876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.428 [2024-07-27 01:30:03.982893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.428 [2024-07-27 01:30:03.982907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.428 [2024-07-27 01:30:03.982923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.428 [2024-07-27 01:30:03.982937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.428 [2024-07-27 01:30:03.982953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.428 [2024-07-27 01:30:03.982967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.428 [2024-07-27 01:30:03.982983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.428 [2024-07-27 01:30:03.982996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.428 [2024-07-27 01:30:03.983012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.428 [2024-07-27 01:30:03.983026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.428 [2024-07-27 01:30:03.983042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.428 [2024-07-27 01:30:03.983056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983285] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983313] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) 
to be set 00:22:12.429 [2024-07-27 01:30:03.983350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.429 [2024-07-27 01:30:03.983612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.429 [2024-07-27 01:30:03.983625] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.983638] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.429 [2024-07-27 01:30:03.983655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.983668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with 
the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.983681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.983694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.983707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.983721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.983750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.983763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.983776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.983789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.983817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.983831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.983849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.983863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.983876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.983889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983902] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.983914] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.983927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.983940] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.983967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.983981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.983988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.983993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.984004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32128 len:12[2024-07-27 01:30:03.984006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.984021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with [2024-07-27 01:30:03.984021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:12.430 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.984035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.984039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.984066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-27 01:30:03.984068] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.984084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.984088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.984097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.430 [2024-07-27 01:30:03.984103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.984120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.984134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.984150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.984165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.984181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.984195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.984210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.984225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.984241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.984255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.984271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.984285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.984301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.430 [2024-07-27 01:30:03.984315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.430 [2024-07-27 01:30:03.984330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.984354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.984369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.984383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.984403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.984428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.984445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.984460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.984476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.984491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.984507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.984521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.984538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.984552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.984568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.984598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.984615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.984629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.984660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.984674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.984690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.984703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.984717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.984731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.984747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.984760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.984776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.984790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.984804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f260 is same with the state(5) to be set 00:22:12.431 [2024-07-27 01:30:03.984878] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x247f260 was disconnected and freed. reset controller. 
00:22:12.431 [2024-07-27 01:30:03.984935] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.431 [2024-07-27 01:30:03.985011] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.431 [2024-07-27 01:30:03.985318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2309f70 (9): Bad file descriptor 00:22:12.431 [2024-07-27 01:30:03.985381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.985402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.985418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.985431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.985446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.985459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.985474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.985487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.985500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c88c0 is same with the state(5) to be set 00:22:12.431 [2024-07-27 01:30:03.985530] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cda60 (9): Bad file descriptor 00:22:12.431 [2024-07-27 01:30:03.985559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226aa50 (9): Bad file descriptor 00:22:12.431 [2024-07-27 01:30:03.985608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.985629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.985644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.985658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.985672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.985686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.985700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.985712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.985725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c8490 is same with the state(5) to be set 00:22:12.431 [2024-07-27 01:30:03.985770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.985790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.985814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.985833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.985848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.985862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.985881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.985895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.985908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2300a30 is same with the state(5) to be set 00:22:12.431 [2024-07-27 01:30:03.985949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2307210 (9): Bad file descriptor 00:22:12.431 [2024-07-27 01:30:03.985997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.986017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.986033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.986047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.986068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.986084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.986098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.431 [2024-07-27 01:30:03.986112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.431 [2024-07-27 01:30:03.986126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2300e60 is same with the state(5) to be 
set 00:22:12.431 [2024-07-27 01:30:03.987329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.431 [2024-07-27 01:30:03.987368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 
01:30:03.987672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987963] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.987977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.987992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.432 [2024-07-27 01:30:03.988546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.432 [2024-07-27 01:30:03.988560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.988576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:03.988590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.988605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:03.988621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.988637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:03.988651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.988667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:03.988681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.988696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:03.988710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.988729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:03.988743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.988758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:03.988772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.988787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:03.988800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.988815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:03.988829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.988844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:03.988857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.988872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:03.988885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.988900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:03.988913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.988928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:03.988941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.988956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:03.988969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:03.995011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.433 [2024-07-27 01:30:03.995041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.433 [2024-07-27 01:30:03.995056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d25c0 is same with the state(5) to be set 00:22:12.433 [2024-07-27 01:30:04.005586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:04.005661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.005682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:04.005698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.005727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:04.005742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.005759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:04.005775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.005792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:04.005806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.005822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:04.005836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.005853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:04.005867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.005883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:04.005897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.005913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:04.005928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.005944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:04.005958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.005975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:04.005988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.006005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:04.006019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.006035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.433 [2024-07-27 01:30:04.006049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.006074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2480820 is same with the state(5) to be set 00:22:12.433 [2024-07-27 01:30:04.006230] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2480820 was disconnected and freed. reset controller. 00:22:12.433 [2024-07-27 01:30:04.006424] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.433 [2024-07-27 01:30:04.006682] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:12.433 [2024-07-27 01:30:04.006857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23424f0 (9): Bad file descriptor 00:22:12.433 [2024-07-27 01:30:04.006896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c88c0 (9): Bad file descriptor 00:22:12.433 [2024-07-27 01:30:04.006936] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:12.433 [2024-07-27 01:30:04.006960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c8490 (9): Bad file descriptor 00:22:12.433 [2024-07-27 01:30:04.006989] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2300a30 (9): Bad file descriptor 00:22:12.433 [2024-07-27 01:30:04.007046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.433 [2024-07-27 01:30:04.007075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.007093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.433 [2024-07-27 01:30:04.007115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.007129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.433 [2024-07-27 01:30:04.007144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.433 [2024-07-27 01:30:04.007159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.434 [2024-07-27 01:30:04.007173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.007187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248b830 is same with the state(5) to be set 00:22:12.434 [2024-07-27 01:30:04.007225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2300e60 (9): Bad file descriptor 00:22:12.434 [2024-07-27 01:30:04.008502] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.434 [2024-07-27 01:30:04.008658] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:12.434 [2024-07-27 01:30:04.008911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.434 [2024-07-27 01:30:04.009072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.434 [2024-07-27 01:30:04.009110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2309f70 with addr=10.0.0.2, port=4420 00:22:12.434 [2024-07-27 01:30:04.009128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2309f70 is same with the state(5) to be set 00:22:12.434 [2024-07-27 01:30:04.009183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 
[2024-07-27 01:30:04.009582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 
01:30:04.009890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.009980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.009994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.010010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.434 [2024-07-27 01:30:04.010024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.434 [2024-07-27 01:30:04.010040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010214] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.010976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.010992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.011006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.011022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.011036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.011053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.011154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.011174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.011190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.011206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.011220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.435 [2024-07-27 01:30:04.011237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.435 [2024-07-27 01:30:04.011251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.011267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.011281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.011296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ef40 is same with the state(5) to be set 00:22:12.436 [2024-07-27 01:30:04.012557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.012580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.012601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.012621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.012639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.012654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.012670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.012684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.012700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.012715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.012731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.012745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.012761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.012775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.012791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.012805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.012822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.012836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.012851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.012866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.012882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.012897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.012913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.012927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.012943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.012956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.012973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.012987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.436 [2024-07-27 01:30:04.013647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.436 [2024-07-27 01:30:04.013661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.013677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.013693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.013709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.013724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.013740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.013754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.013770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.013784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.013804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.013819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.013835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.013849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.013865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.013879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.013896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.013910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.013926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.013940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.013956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.013970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.013987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.437 [2024-07-27 01:30:04.014574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-07-27 01:30:04.014592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f6e70 is same with the state(5) to be set 00:22:12.437 [2024-07-27 01:30:04.016340] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:12.437 [2024-07-27 01:30:04.016385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:12.437 [2024-07-27 01:30:04.016637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.437 [2024-07-27 01:30:04.016790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.437 [2024-07-27 01:30:04.016815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226aa50 with addr=10.0.0.2, port=4420 00:22:12.437 [2024-07-27 01:30:04.016832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226aa50 is same with the state(5) to be set 00:22:12.437 [2024-07-27 01:30:04.016859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2309f70 (9): Bad file descriptor 00:22:12.437 [2024-07-27 01:30:04.016879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in 
error state 00:22:12.437 [2024-07-27 01:30:04.016893] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:12.437 [2024-07-27 01:30:04.016910] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:12.437 [2024-07-27 01:30:04.017391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.437 [2024-07-27 01:30:04.017561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.437 [2024-07-27 01:30:04.017737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.437 [2024-07-27 01:30:04.017762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2307210 with addr=10.0.0.2, port=4420 00:22:12.437 [2024-07-27 01:30:04.017778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2307210 is same with the state(5) to be set 00:22:12.437 [2024-07-27 01:30:04.017946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.438 [2024-07-27 01:30:04.018186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.438 [2024-07-27 01:30:04.018211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cda60 with addr=10.0.0.2, port=4420 00:22:12.438 [2024-07-27 01:30:04.018228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cda60 is same with the state(5) to be set 00:22:12.438 [2024-07-27 01:30:04.018248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226aa50 (9): Bad file descriptor 00:22:12.438 [2024-07-27 01:30:04.018266] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:12.438 [2024-07-27 01:30:04.018280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:12.438 [2024-07-27 01:30:04.018294] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:12.438 [2024-07-27 01:30:04.018359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248b830 (9): Bad file descriptor 00:22:12.438 [2024-07-27 01:30:04.018400] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
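The repeated "posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111" entries above report ECONNREFUSED: the host-side reconnect attempts to 10.0.0.2 port 4420 are being refused, which is consistent with the target's listener being unavailable while the controllers are reset. Each refusal is then followed by nvme_ctrlr marking the controller as failed and bdev_nvme either retrying the reset or noting that a failover is already in progress. As a point of reference only (this is a minimal standalone sketch with plain POSIX sockets, not SPDK code; the loopback address and port are illustrative), connecting to a port with no listener reproduces the same errno:

/* Sketch: connect() to a port with no listener fails with errno 111
 * (ECONNREFUSED) on Linux, the same error the posix_sock_create
 * messages above report. Address and port are illustrative only. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumes nothing listens here */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port, Linux reports ECONNREFUSED (111). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}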
00:22:12.438 [2024-07-27 01:30:04.019011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 
01:30:04.019463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019848] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.019963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.019979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.020001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.020018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.020039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.020056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.020086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.020106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.020131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.020148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.020169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.020185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.020206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.020223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.020243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.020260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-07-27 01:30:04.020280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-07-27 01:30:04.020296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.020973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.020993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.021009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.021029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.021045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.021078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.021108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.021129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.021145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.021165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.021182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.021202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.021219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.021239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.021255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.021275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.021291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.021311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.021327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.021351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.021367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.021387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.021412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.021433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.021451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-07-27 01:30:04.021472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-07-27 01:30:04.021488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.021513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.021530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.022368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244add0 is same with the state(5) to be set 00:22:12.440 [2024-07-27 01:30:04.022463] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x244add0 was disconnected and freed. reset controller. 00:22:12.440 [2024-07-27 01:30:04.022480] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.440 [2024-07-27 01:30:04.022561] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.440 [2024-07-27 01:30:04.022597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.440 [2024-07-27 01:30:04.022653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2307210 (9): Bad file descriptor 00:22:12.440 [2024-07-27 01:30:04.022678] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cda60 (9): Bad file descriptor 00:22:12.440 [2024-07-27 01:30:04.022696] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:12.440 [2024-07-27 01:30:04.022710] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:12.440 [2024-07-27 01:30:04.022725] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:12.440 [2024-07-27 01:30:04.022751] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:12.440 [2024-07-27 01:30:04.022862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.022884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.022908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.022924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.022943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.022958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.022974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.022988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 
01:30:04.023208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023515] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.440 [2024-07-27 01:30:04.023809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.440 [2024-07-27 01:30:04.023825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.023838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.023855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.023869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.023885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.023900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.023916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.023933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.023950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.023964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.023980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.023994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.024880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.024895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2446c30 is same with the state(5) to be set 00:22:12.441 [2024-07-27 01:30:04.026137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.026160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.441 [2024-07-27 01:30:04.026181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.441 [2024-07-27 01:30:04.026197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.026980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.026997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.027011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.027027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.027041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.027057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.027078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.027105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.027120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.027136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.027154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.027171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.027185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.027201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.027216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.027232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.027247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.027263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.027277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.027294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.027308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.442 [2024-07-27 01:30:04.027324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.442 [2024-07-27 01:30:04.027347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.027985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.027999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.028015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.028029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.028045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.028072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.028091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.028116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.028132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.028147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.028162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.028177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.028191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2448210 is same with the state(5) to be set 00:22:12.443 [2024-07-27 01:30:04.029427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.029450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.029471] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.029487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.029504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.029518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.029535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.029549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.029565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.029588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.029604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.029619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.443 [2024-07-27 01:30:04.029635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.443 [2024-07-27 01:30:04.029650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.029666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.029680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.029696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.029711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.029726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.029741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.029757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.029771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.029787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.029802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.029819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.029833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.029849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.029863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.029879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.029894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.029910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.029924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.029940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.029955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.029974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.029989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27904 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.444 [2024-07-27 01:30:04.030662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.444 [2024-07-27 01:30:04.030678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.030693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.030709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.030723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.030739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:12.445 [2024-07-27 01:30:04.030753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.030772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.030787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.030803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.030817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.030834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.030848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.030864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.030878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.030894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.030908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.030923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.030937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.030953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.030967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.030983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.030997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.031028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 
[2024-07-27 01:30:04.031064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.031108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.031140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.031176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.031208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.031238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.031269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.031299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.031330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.031368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 
01:30:04.031398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.031434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.445 [2024-07-27 01:30:04.031465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.445 [2024-07-27 01:30:04.031480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24497f0 is same with the state(5) to be set 00:22:12.445 [2024-07-27 01:30:04.032761] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:12.445 [2024-07-27 01:30:04.032794] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.445 [2024-07-27 01:30:04.032813] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:12.445 [2024-07-27 01:30:04.032832] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:12.445 [2024-07-27 01:30:04.032851] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:12.445 [2024-07-27 01:30:04.032918] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:12.445 [2024-07-27 01:30:04.032937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:12.445 [2024-07-27 01:30:04.032961] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:12.445 [2024-07-27 01:30:04.032985] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:12.445 [2024-07-27 01:30:04.033000] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:12.445 [2024-07-27 01:30:04.033015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:12.445 [2024-07-27 01:30:04.033041] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.445 [2024-07-27 01:30:04.033077] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.445 [2024-07-27 01:30:04.033149] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.445 [2024-07-27 01:30:04.033546] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:12.445 [2024-07-27 01:30:04.033572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.445 [2024-07-27 01:30:04.033587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
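The long run of ABORTED - SQ DELETION completions above is bdevperf I/O being failed back as the target deletes its submission queues during shutdown, and the nvme_ctrlr_disconnect / "controller reinitialization failed" / "in failed state" sequence that follows is the host-side bdev_nvme layer giving up on reconnecting to cnode1 and cnode2; the log shows those controllers ending up in a failed state rather than recovering. If the target later comes back, one recovery option is to re-attach such a controller by hand. A minimal sketch using SPDK's rpc.py, assuming the default RPC socket, the address and subsystem NQN seen in this log, and an illustrative bdev name Nvme1 (the rpc.py path below is simply the checkout used by this job, adjust as needed):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Drop the instance stuck in the failed state, then attach a fresh one.
    $RPC bdev_nvme_detach_controller Nvme1
    $RPC bdev_nvme_attach_controller -b Nvme1 -t tcp -f ipv4 \
         -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1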
00:22:12.445 [2024-07-27 01:30:04.033818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.445 [2024-07-27 01:30:04.033980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.445 [2024-07-27 01:30:04.034008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23424f0 with addr=10.0.0.2, port=4420 00:22:12.445 [2024-07-27 01:30:04.034026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23424f0 is same with the state(5) to be set 00:22:12.445 [2024-07-27 01:30:04.034195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.445 [2024-07-27 01:30:04.034361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.445 [2024-07-27 01:30:04.034385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c88c0 with addr=10.0.0.2, port=4420 00:22:12.445 [2024-07-27 01:30:04.034401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c88c0 is same with the state(5) to be set 00:22:12.445 [2024-07-27 01:30:04.034535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.445 [2024-07-27 01:30:04.034685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.446 [2024-07-27 01:30:04.034709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c8490 with addr=10.0.0.2, port=4420 00:22:12.446 [2024-07-27 01:30:04.034725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c8490 is same with the state(5) to be set 00:22:12.446 [2024-07-27 01:30:04.034865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.446 [2024-07-27 01:30:04.035025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.446 [2024-07-27 01:30:04.035049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300e60 with addr=10.0.0.2, port=4420 00:22:12.446 [2024-07-27 01:30:04.035073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2300e60 is same with the state(5) to be set 00:22:12.446 [2024-07-27 01:30:04.035933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.035958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.035983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.446 [2024-07-27 01:30:04.036978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.446 [2024-07-27 01:30:04.036995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.447 [2024-07-27 01:30:04.037931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.447 [2024-07-27 01:30:04.037945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x3356ea0 is same with the state(5) to be set 00:22:12.447 [2024-07-27 01:30:04.039834] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:12.447 [2024-07-27 01:30:04.039870] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:12.447 task offset: 29184 on job bdev=Nvme10n1 fails 00:22:12.447 00:22:12.447 Latency(us) 00:22:12.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.447 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.447 Job: Nvme1n1 ended in about 0.67 seconds with error 00:22:12.447 Verification LBA range: start 0x0 length 0x400 00:22:12.447 Nvme1n1 : 0.67 311.80 19.49 95.94 0.00 155827.90 97867.09 121945.51 00:22:12.447 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.447 Job: Nvme2n1 ended in about 0.67 seconds with error 00:22:12.447 Verification LBA range: start 0x0 length 0x400 00:22:12.447 Nvme2n1 : 0.67 244.64 15.29 95.47 0.00 184788.52 100973.99 166995.44 00:22:12.447 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.447 Job: Nvme3n1 ended in about 0.64 seconds with error 00:22:12.447 Verification LBA range: start 0x0 length 0x400 00:22:12.448 Nvme3n1 : 0.64 323.97 20.25 99.68 0.00 146386.58 78449.02 125829.12 00:22:12.448 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.448 Job: Nvme4n1 ended in about 0.66 seconds with error 00:22:12.448 Verification LBA range: start 0x0 length 0x400 00:22:12.448 Nvme4n1 : 0.66 313.68 19.61 96.52 0.00 149530.59 90876.59 118061.89 00:22:12.448 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.448 Job: Nvme5n1 ended in about 0.68 seconds with error 00:22:12.448 Verification LBA range: start 0x0 length 0x400 00:22:12.448 Nvme5n1 : 0.68 305.59 19.10 94.03 0.00 151968.56 78449.02 139033.41 00:22:12.448 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.448 Job: Nvme6n1 ended in about 0.68 seconds with error 00:22:12.448 Verification LBA range: start 0x0 length 0x400 00:22:12.448 Nvme6n1 : 0.68 239.79 14.99 93.58 0.00 180169.07 102527.43 148354.09 00:22:12.448 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.448 Job: Nvme7n1 ended in about 0.69 seconds with error 00:22:12.448 Verification LBA range: start 0x0 length 0x400 00:22:12.448 Nvme7n1 : 0.69 302.68 18.92 93.13 0.00 150037.46 86992.97 117285.17 00:22:12.448 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.448 Job: Nvme8n1 ended in about 0.68 seconds with error 00:22:12.448 Verification LBA range: start 0x0 length 0x400 00:22:12.448 Nvme8n1 : 0.68 307.24 19.20 94.53 0.00 145663.40 86216.25 122722.23 00:22:12.448 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.448 Job: Nvme9n1 ended in about 0.69 seconds with error 00:22:12.448 Verification LBA range: start 0x0 length 0x400 00:22:12.448 Nvme9n1 : 0.69 299.85 18.74 92.26 0.00 148137.06 78837.38 125052.40 00:22:12.448 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.448 Job: Nvme10n1 ended in about 0.63 seconds with error 
00:22:12.448 Verification LBA range: start 0x0 length 0x400 00:22:12.448 Nvme10n1 : 0.63 327.86 20.49 100.88 0.00 132453.26 6092.42 121168.78 00:22:12.448 =================================================================================================================== 00:22:12.448 Total : 2977.09 186.07 956.02 0.00 153560.65 6092.42 166995.44 00:22:12.448 [2024-07-27 01:30:04.068103] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:12.448 [2024-07-27 01:30:04.068193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:12.448 [2024-07-27 01:30:04.068546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.448 [2024-07-27 01:30:04.068714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.448 [2024-07-27 01:30:04.068742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300a30 with addr=10.0.0.2, port=4420 00:22:12.448 [2024-07-27 01:30:04.068774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2300a30 is same with the state(5) to be set 00:22:12.448 [2024-07-27 01:30:04.068805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23424f0 (9): Bad file descriptor 00:22:12.448 [2024-07-27 01:30:04.068830] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c88c0 (9): Bad file descriptor 00:22:12.448 [2024-07-27 01:30:04.068849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c8490 (9): Bad file descriptor 00:22:12.448 [2024-07-27 01:30:04.068867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2300e60 (9): Bad file descriptor 00:22:12.448 [2024-07-27 01:30:04.069236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.448 [2024-07-27 01:30:04.069379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.448 [2024-07-27 01:30:04.069412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2309f70 with addr=10.0.0.2, port=4420 00:22:12.448 [2024-07-27 01:30:04.069429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2309f70 is same with the state(5) to be set 00:22:12.448 [2024-07-27 01:30:04.069571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.448 [2024-07-27 01:30:04.069715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.448 [2024-07-27 01:30:04.069741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226aa50 with addr=10.0.0.2, port=4420 00:22:12.448 [2024-07-27 01:30:04.069757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226aa50 is same with the state(5) to be set 00:22:12.448 [2024-07-27 01:30:04.069901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.448 [2024-07-27 01:30:04.070076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.448 [2024-07-27 01:30:04.070103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248b830 with addr=10.0.0.2, port=4420 00:22:12.448 [2024-07-27 01:30:04.070119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248b830 is same with the state(5) to be set 00:22:12.448 [2024-07-27 01:30:04.070138] 
nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2300a30 (9): Bad file descriptor 00:22:12.448 [2024-07-27 01:30:04.070157] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:12.448 [2024-07-27 01:30:04.070171] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:12.448 [2024-07-27 01:30:04.070189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:12.448 [2024-07-27 01:30:04.070210] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:12.448 [2024-07-27 01:30:04.070225] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:12.448 [2024-07-27 01:30:04.070239] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:12.448 [2024-07-27 01:30:04.070255] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:12.448 [2024-07-27 01:30:04.070269] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:12.448 [2024-07-27 01:30:04.070283] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:12.448 [2024-07-27 01:30:04.070300] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:12.448 [2024-07-27 01:30:04.070314] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:12.448 [2024-07-27 01:30:04.070328] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:12.448 [2024-07-27 01:30:04.070374] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.448 [2024-07-27 01:30:04.070398] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.448 [2024-07-27 01:30:04.070418] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.448 [2024-07-27 01:30:04.070439] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.448 [2024-07-27 01:30:04.070459] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.448 [2024-07-27 01:30:04.070837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.448 [2024-07-27 01:30:04.070863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.448 [2024-07-27 01:30:04.070877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.448 [2024-07-27 01:30:04.070890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
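The Latency(us) block a few lines up is bdevperf's end-of-run summary: one row per NvmeXn1 bdev with runtime, IOPS, MiB/s, failed and timed-out I/O per second, and average/min/max completion latency in microseconds, plus a Total row. Because every job uses a fixed 64 KiB I/O size (IO size: 65536), the MiB/s column is just IOPS scaled by the I/O size, which makes individual rows easy to sanity-check; a quick check of the Nvme1n1 row, assuming nothing beyond standard awk:

    # MiB/s = IOPS * io_size_bytes / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 311.80 * 65536 / 1048576 }'
    # prints 19.49 MiB/s, matching the Nvme1n1 row in the table above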
00:22:12.448 [2024-07-27 01:30:04.070914] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2309f70 (9): Bad file descriptor 00:22:12.448 [2024-07-27 01:30:04.070936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226aa50 (9): Bad file descriptor 00:22:12.448 [2024-07-27 01:30:04.070955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248b830 (9): Bad file descriptor 00:22:12.448 [2024-07-27 01:30:04.070972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:12.448 [2024-07-27 01:30:04.070986] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:12.448 [2024-07-27 01:30:04.071000] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:12.448 [2024-07-27 01:30:04.071057] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:12.448 [2024-07-27 01:30:04.071099] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:12.448 [2024-07-27 01:30:04.071117] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.448 [2024-07-27 01:30:04.071147] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:12.448 [2024-07-27 01:30:04.071164] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:12.449 [2024-07-27 01:30:04.071178] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:12.449 [2024-07-27 01:30:04.071196] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:12.449 [2024-07-27 01:30:04.071210] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:12.449 [2024-07-27 01:30:04.071224] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:12.449 [2024-07-27 01:30:04.071240] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:12.449 [2024-07-27 01:30:04.071254] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:12.449 [2024-07-27 01:30:04.071267] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:12.449 [2024-07-27 01:30:04.071315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.449 [2024-07-27 01:30:04.071335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.449 [2024-07-27 01:30:04.071348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
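The ten subsystems cycling through reset and failure here (nqn.2016-06.io.spdk:cnode1 through cnode10) were created by the shutdown test earlier in the run, before this excerpt. For orientation only, an NVMe-oF TCP subsystem with one namespace and a listener is typically built with rpc.py along these lines; this is a generic sketch rather than the exact commands the test used, and the Malloc0 bdev and serial number are placeholders:

    RPC=./scripts/rpc.py   # relative to the spdk checkout
    $RPC nvmf_create_transport -t tcp
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420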
00:22:12.449 [2024-07-27 01:30:04.071521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.449 [2024-07-27 01:30:04.071672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.449 [2024-07-27 01:30:04.071697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cda60 with addr=10.0.0.2, port=4420 00:22:12.449 [2024-07-27 01:30:04.071714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cda60 is same with the state(5) to be set 00:22:12.449 [2024-07-27 01:30:04.071851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.449 [2024-07-27 01:30:04.071990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.449 [2024-07-27 01:30:04.072015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2307210 with addr=10.0.0.2, port=4420 00:22:12.449 [2024-07-27 01:30:04.072031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2307210 is same with the state(5) to be set 00:22:12.449 [2024-07-27 01:30:04.072080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cda60 (9): Bad file descriptor 00:22:12.449 [2024-07-27 01:30:04.072118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2307210 (9): Bad file descriptor 00:22:12.449 [2024-07-27 01:30:04.072158] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:12.449 [2024-07-27 01:30:04.072177] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:12.449 [2024-07-27 01:30:04.072192] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:12.449 [2024-07-27 01:30:04.072208] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:12.449 [2024-07-27 01:30:04.072223] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:12.449 [2024-07-27 01:30:04.072236] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:12.449 [2024-07-27 01:30:04.072272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.449 [2024-07-27 01:30:04.072290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
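Two error codes account for most of the spam in this tail: "connect() failed, errno = 111" is ECONNREFUSED, meaning nothing is listening on 10.0.0.2:4420 any more because the shutdown test has taken the target down, and the "Failed to flush tqpair=... (9): Bad file descriptor" lines are EBADF from flushing qpairs whose sockets were already closed. When triaging a run like this it helps to confirm from the initiator side whether the listener is really gone; a small probe, assuming nvme-cli and a netcat that supports -z are installed:

    # Is anything still listening on the NVMe-oF TCP port?
    nc -z -w 2 10.0.0.2 4420 && echo "listener up" || echo "refused or timed out"
    # If it is up, the discovery log should list the cnode subsystems again:
    nvme discover -t tcp -a 10.0.0.2 -s 4420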
00:22:13.030 01:30:04 -- target/shutdown.sh@135 -- # nvmfpid= 00:22:13.030 01:30:04 -- target/shutdown.sh@138 -- # sleep 1 00:22:13.970 01:30:05 -- target/shutdown.sh@141 -- # kill -9 699089 00:22:13.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (699089) - No such process 00:22:13.970 01:30:05 -- target/shutdown.sh@141 -- # true 00:22:13.970 01:30:05 -- target/shutdown.sh@143 -- # stoptarget 00:22:13.970 01:30:05 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:13.970 01:30:05 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:13.970 01:30:05 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:13.970 01:30:05 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:13.970 01:30:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:13.970 01:30:05 -- nvmf/common.sh@116 -- # sync 00:22:13.970 01:30:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:13.970 01:30:05 -- nvmf/common.sh@119 -- # set +e 00:22:13.970 01:30:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:13.970 01:30:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:13.970 rmmod nvme_tcp 00:22:13.970 rmmod nvme_fabrics 00:22:13.970 rmmod nvme_keyring 00:22:13.970 01:30:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:13.970 01:30:05 -- nvmf/common.sh@123 -- # set -e 00:22:13.970 01:30:05 -- nvmf/common.sh@124 -- # return 0 00:22:13.970 01:30:05 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:22:13.970 01:30:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:13.970 01:30:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:13.970 01:30:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:13.970 01:30:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.970 01:30:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:13.970 01:30:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.970 01:30:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.970 01:30:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.534 01:30:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:16.534 00:22:16.534 real 0m8.003s 00:22:16.534 user 0m20.560s 00:22:16.534 sys 0m1.467s 00:22:16.534 01:30:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.534 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:22:16.534 ************************************ 00:22:16.534 END TEST nvmf_shutdown_tc3 00:22:16.534 ************************************ 00:22:16.534 01:30:07 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:22:16.534 00:22:16.534 real 0m28.856s 00:22:16.534 user 1m23.296s 00:22:16.534 sys 0m6.425s 00:22:16.534 01:30:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.534 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:22:16.534 ************************************ 00:22:16.534 END TEST nvmf_shutdown 00:22:16.534 ************************************ 00:22:16.534 01:30:07 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:22:16.534 01:30:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:16.534 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:22:16.534 01:30:07 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:22:16.534 01:30:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:16.534 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:22:16.534 01:30:07 
-- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:22:16.534 01:30:07 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:16.534 01:30:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:16.534 01:30:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:16.534 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:22:16.534 ************************************ 00:22:16.534 START TEST nvmf_multicontroller 00:22:16.534 ************************************ 00:22:16.534 01:30:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:16.534 * Looking for test storage... 00:22:16.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:16.534 01:30:07 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.534 01:30:07 -- nvmf/common.sh@7 -- # uname -s 00:22:16.534 01:30:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.534 01:30:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.534 01:30:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.534 01:30:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.534 01:30:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.534 01:30:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.534 01:30:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.534 01:30:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.534 01:30:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.534 01:30:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.534 01:30:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.534 01:30:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.534 01:30:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.534 01:30:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.534 01:30:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.534 01:30:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.534 01:30:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.534 01:30:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.534 01:30:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.534 01:30:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.535 01:30:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.535 01:30:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.535 01:30:07 -- paths/export.sh@5 -- # export PATH 00:22:16.535 01:30:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.535 01:30:07 -- nvmf/common.sh@46 -- # : 0 00:22:16.535 01:30:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:16.535 01:30:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:16.535 01:30:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:16.535 01:30:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.535 01:30:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.535 01:30:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:16.535 01:30:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:16.535 01:30:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:16.535 01:30:07 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.535 01:30:07 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.535 01:30:07 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:16.535 01:30:07 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:16.535 01:30:07 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.535 01:30:07 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:16.535 01:30:07 -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:16.535 01:30:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:16.535 01:30:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.535 01:30:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:16.535 01:30:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:16.535 01:30:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:16.535 01:30:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.535 01:30:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.535 01:30:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:22:16.535 01:30:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:16.535 01:30:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:16.535 01:30:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:16.535 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:22:18.435 01:30:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:18.435 01:30:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:18.435 01:30:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:18.435 01:30:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:18.435 01:30:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:18.435 01:30:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:18.435 01:30:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:18.435 01:30:09 -- nvmf/common.sh@294 -- # net_devs=() 00:22:18.435 01:30:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:18.435 01:30:09 -- nvmf/common.sh@295 -- # e810=() 00:22:18.435 01:30:09 -- nvmf/common.sh@295 -- # local -ga e810 00:22:18.435 01:30:09 -- nvmf/common.sh@296 -- # x722=() 00:22:18.435 01:30:09 -- nvmf/common.sh@296 -- # local -ga x722 00:22:18.435 01:30:09 -- nvmf/common.sh@297 -- # mlx=() 00:22:18.435 01:30:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:18.435 01:30:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.435 01:30:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.435 01:30:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.435 01:30:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.435 01:30:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.435 01:30:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.435 01:30:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.435 01:30:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.435 01:30:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.435 01:30:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.435 01:30:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.435 01:30:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:18.435 01:30:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:18.435 01:30:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:18.435 01:30:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:18.435 01:30:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:18.435 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:18.435 01:30:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:18.435 01:30:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:18.435 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:18.435 01:30:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
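The scan above classifies ports by PCI vendor/device ID (Intel E810 is vendor 0x8086 with device 0x1592 or 0x159b) and then reads the interface name out of sysfs. A minimal stand-alone sketch of the same idea, using only sysfs and a POSIX shell rather than the harness's gather_supported_nvmf_pci_devs helper:

  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")
      device=$(cat "$pci/device")
      if [ "$vendor" = "0x8086" ] && { [ "$device" = "0x159b" ] || [ "$device" = "0x1592" ]; }; then
          # ports bound to a network driver expose their interface name under net/
          echo "E810 port ${pci##*/}: $(ls "$pci/net" 2>/dev/null)"
      fi
  done

On this machine the two matches are 0000:0a:00.0 (cvl_0_0) and 0000:0a:00.1 (cvl_0_1), as the log confirms below.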
00:22:18.435 01:30:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:18.435 01:30:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:18.435 01:30:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.435 01:30:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:18.435 01:30:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.435 01:30:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:18.435 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:18.435 01:30:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.435 01:30:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:18.435 01:30:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.435 01:30:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:18.435 01:30:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.435 01:30:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:18.435 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:18.435 01:30:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.435 01:30:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:18.435 01:30:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:18.435 01:30:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:18.435 01:30:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:18.436 01:30:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:18.436 01:30:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.436 01:30:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.436 01:30:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.436 01:30:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:18.436 01:30:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.436 01:30:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.436 01:30:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:18.436 01:30:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.436 01:30:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.436 01:30:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:18.436 01:30:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:18.436 01:30:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.436 01:30:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.436 01:30:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.436 01:30:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.436 01:30:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:18.436 01:30:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.436 01:30:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.436 01:30:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:22:18.436 01:30:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:18.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:22:18.436 00:22:18.436 --- 10.0.0.2 ping statistics --- 00:22:18.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.436 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:22:18.436 01:30:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:18.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:22:18.436 00:22:18.436 --- 10.0.0.1 ping statistics --- 00:22:18.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.436 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:22:18.436 01:30:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.436 01:30:09 -- nvmf/common.sh@410 -- # return 0 00:22:18.436 01:30:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:18.436 01:30:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.436 01:30:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:18.436 01:30:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:18.436 01:30:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.436 01:30:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:18.436 01:30:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:18.436 01:30:09 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:18.436 01:30:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:18.436 01:30:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:18.436 01:30:09 -- common/autotest_common.sh@10 -- # set +x 00:22:18.436 01:30:09 -- nvmf/common.sh@469 -- # nvmfpid=702124 00:22:18.436 01:30:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:18.436 01:30:09 -- nvmf/common.sh@470 -- # waitforlisten 702124 00:22:18.436 01:30:09 -- common/autotest_common.sh@819 -- # '[' -z 702124 ']' 00:22:18.436 01:30:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.436 01:30:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:18.436 01:30:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.436 01:30:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:18.436 01:30:09 -- common/autotest_common.sh@10 -- # set +x 00:22:18.436 [2024-07-27 01:30:10.001563] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:18.436 [2024-07-27 01:30:10.001644] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.436 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.436 [2024-07-27 01:30:10.073139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:18.436 [2024-07-27 01:30:10.186931] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:18.436 [2024-07-27 01:30:10.187121] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
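The namespace plumbing just recorded moves one E810 port (cvl_0_0) into a private namespace for the target, leaves its peer (cvl_0_1) in the root namespace for the initiator, opens TCP/4420, and checks reachability in both directions. Condensed into a stand-alone sketch (interface names, namespace name, and addresses are exactly the values printed above; this is not the harness's nvmf_tcp_init function itself):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator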
00:22:18.436 [2024-07-27 01:30:10.187147] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.436 [2024-07-27 01:30:10.187171] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.436 [2024-07-27 01:30:10.187274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.436 [2024-07-27 01:30:10.187370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:18.436 [2024-07-27 01:30:10.187373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.370 01:30:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:19.370 01:30:10 -- common/autotest_common.sh@852 -- # return 0 00:22:19.370 01:30:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:19.370 01:30:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:19.370 01:30:10 -- common/autotest_common.sh@10 -- # set +x 00:22:19.370 01:30:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.370 01:30:10 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:19.370 01:30:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.370 01:30:10 -- common/autotest_common.sh@10 -- # set +x 00:22:19.370 [2024-07-27 01:30:10.983516] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.370 01:30:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.370 01:30:10 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:19.370 01:30:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.370 01:30:10 -- common/autotest_common.sh@10 -- # set +x 00:22:19.370 Malloc0 00:22:19.370 01:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.370 01:30:11 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:19.370 01:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.370 01:30:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.370 01:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.370 01:30:11 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:19.370 01:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.370 01:30:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.370 01:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.370 01:30:11 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:19.370 01:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.371 01:30:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.371 [2024-07-27 01:30:11.055088] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.371 01:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.371 01:30:11 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:19.371 01:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.371 01:30:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.371 [2024-07-27 01:30:11.062958] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:19.371 01:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
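That completes provisioning of the first subsystem: a TCP transport, a 64 MB malloc block device, nqn.2016-06.io.spdk:cnode1, and listeners on ports 4420 and 4421. The rpc_cmd calls above forward their arguments to SPDK's scripts/rpc.py (assuming the harness's usual definition of rpc_cmd), so the same state can be built by hand with:

  rpc() { ./scripts/rpc.py "$@"; }      # default socket: /var/tmp/spdk.sock
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 -b Malloc0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The second subsystem (cnode2 backed by Malloc1) is created the same way immediately below.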
00:22:19.371 01:30:11 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:19.371 01:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.371 01:30:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.371 Malloc1 00:22:19.371 01:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.371 01:30:11 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:19.371 01:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.371 01:30:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.371 01:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.371 01:30:11 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:19.371 01:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.371 01:30:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.371 01:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.371 01:30:11 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:19.371 01:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.371 01:30:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.371 01:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.371 01:30:11 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:19.371 01:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.371 01:30:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.371 01:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.371 01:30:11 -- host/multicontroller.sh@44 -- # bdevperf_pid=702283 00:22:19.371 01:30:11 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:19.371 01:30:11 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:19.371 01:30:11 -- host/multicontroller.sh@47 -- # waitforlisten 702283 /var/tmp/bdevperf.sock 00:22:19.371 01:30:11 -- common/autotest_common.sh@819 -- # '[' -z 702283 ']' 00:22:19.371 01:30:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.371 01:30:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:19.371 01:30:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:19.371 01:30:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:19.371 01:30:11 -- common/autotest_common.sh@10 -- # set +x 00:22:20.748 01:30:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:20.748 01:30:12 -- common/autotest_common.sh@852 -- # return 0 00:22:20.748 01:30:12 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:20.748 01:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.748 01:30:12 -- common/autotest_common.sh@10 -- # set +x 00:22:20.748 NVMe0n1 00:22:20.748 01:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.748 01:30:12 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:20.748 01:30:12 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:20.748 01:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.748 01:30:12 -- common/autotest_common.sh@10 -- # set +x 00:22:20.748 01:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.748 1 00:22:20.748 01:30:12 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:20.748 01:30:12 -- common/autotest_common.sh@640 -- # local es=0 00:22:20.748 01:30:12 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:20.748 01:30:12 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:20.748 01:30:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:20.748 01:30:12 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:20.748 01:30:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:20.748 01:30:12 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:20.748 01:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.748 01:30:12 -- common/autotest_common.sh@10 -- # set +x 00:22:20.748 request: 00:22:20.748 { 00:22:20.748 "name": "NVMe0", 00:22:20.748 "trtype": "tcp", 00:22:20.748 "traddr": "10.0.0.2", 00:22:20.748 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:20.748 "hostaddr": "10.0.0.2", 00:22:20.748 "hostsvcid": "60000", 00:22:20.748 "adrfam": "ipv4", 00:22:20.748 "trsvcid": "4420", 00:22:20.748 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.748 "method": "bdev_nvme_attach_controller", 00:22:20.748 "req_id": 1 00:22:20.748 } 00:22:20.748 Got JSON-RPC error response 00:22:20.748 response: 00:22:20.748 { 00:22:20.748 "code": -114, 00:22:20.748 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:20.748 } 00:22:20.748 01:30:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:20.748 01:30:12 -- common/autotest_common.sh@643 -- # es=1 00:22:20.748 01:30:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:20.748 01:30:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:20.748 01:30:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:20.748 01:30:12 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:20.748 01:30:12 -- common/autotest_common.sh@640 -- # local es=0 00:22:20.748 01:30:12 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:20.748 01:30:12 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:20.748 01:30:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:20.748 01:30:12 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:20.748 01:30:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:20.748 01:30:12 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:20.748 01:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.748 01:30:12 -- common/autotest_common.sh@10 -- # set +x 00:22:20.748 request: 00:22:20.748 { 00:22:20.748 "name": "NVMe0", 00:22:20.748 "trtype": "tcp", 00:22:20.748 "traddr": "10.0.0.2", 00:22:20.748 "hostaddr": "10.0.0.2", 00:22:20.748 "hostsvcid": "60000", 00:22:20.748 "adrfam": "ipv4", 00:22:20.748 "trsvcid": "4420", 00:22:20.748 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:20.748 "method": "bdev_nvme_attach_controller", 00:22:20.748 "req_id": 1 00:22:20.748 } 00:22:20.748 Got JSON-RPC error response 00:22:20.748 response: 00:22:20.748 { 00:22:20.748 "code": -114, 00:22:20.748 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:20.748 } 00:22:20.748 01:30:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:20.748 01:30:12 -- common/autotest_common.sh@643 -- # es=1 00:22:20.748 01:30:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:20.748 01:30:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:20.748 01:30:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:20.748 01:30:12 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:20.748 01:30:12 -- common/autotest_common.sh@640 -- # local es=0 00:22:20.748 01:30:12 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:20.748 01:30:12 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:20.748 01:30:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:20.748 01:30:12 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:20.748 01:30:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:20.749 01:30:12 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:20.749 01:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.749 01:30:12 -- common/autotest_common.sh@10 -- # set +x 00:22:20.749 request: 00:22:20.749 { 00:22:20.749 "name": "NVMe0", 00:22:20.749 "trtype": "tcp", 00:22:20.749 "traddr": "10.0.0.2", 00:22:20.749 "hostaddr": 
"10.0.0.2", 00:22:20.749 "hostsvcid": "60000", 00:22:20.749 "adrfam": "ipv4", 00:22:20.749 "trsvcid": "4420", 00:22:20.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.749 "multipath": "disable", 00:22:20.749 "method": "bdev_nvme_attach_controller", 00:22:20.749 "req_id": 1 00:22:20.749 } 00:22:20.749 Got JSON-RPC error response 00:22:20.749 response: 00:22:20.749 { 00:22:20.749 "code": -114, 00:22:20.749 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:20.749 } 00:22:20.749 01:30:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:20.749 01:30:12 -- common/autotest_common.sh@643 -- # es=1 00:22:20.749 01:30:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:20.749 01:30:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:20.749 01:30:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:20.749 01:30:12 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:20.749 01:30:12 -- common/autotest_common.sh@640 -- # local es=0 00:22:20.749 01:30:12 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:20.749 01:30:12 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:20.749 01:30:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:20.749 01:30:12 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:20.749 01:30:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:20.749 01:30:12 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:20.749 01:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.749 01:30:12 -- common/autotest_common.sh@10 -- # set +x 00:22:20.749 request: 00:22:20.749 { 00:22:20.749 "name": "NVMe0", 00:22:20.749 "trtype": "tcp", 00:22:20.749 "traddr": "10.0.0.2", 00:22:20.749 "hostaddr": "10.0.0.2", 00:22:20.749 "hostsvcid": "60000", 00:22:20.749 "adrfam": "ipv4", 00:22:20.749 "trsvcid": "4420", 00:22:20.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.749 "multipath": "failover", 00:22:20.749 "method": "bdev_nvme_attach_controller", 00:22:20.749 "req_id": 1 00:22:20.749 } 00:22:20.749 Got JSON-RPC error response 00:22:20.749 response: 00:22:20.749 { 00:22:20.749 "code": -114, 00:22:20.749 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:20.749 } 00:22:20.749 01:30:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:20.749 01:30:12 -- common/autotest_common.sh@643 -- # es=1 00:22:20.749 01:30:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:20.749 01:30:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:20.749 01:30:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:20.749 01:30:12 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:20.749 01:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.749 01:30:12 -- common/autotest_common.sh@10 -- # set +x 00:22:21.008 00:22:21.008 01:30:12 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:22:21.008 01:30:12 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:21.008 01:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.008 01:30:12 -- common/autotest_common.sh@10 -- # set +x 00:22:21.008 01:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.008 01:30:12 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:21.008 01:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.008 01:30:12 -- common/autotest_common.sh@10 -- # set +x 00:22:21.008 00:22:21.008 01:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.008 01:30:12 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:21.008 01:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.008 01:30:12 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:21.008 01:30:12 -- common/autotest_common.sh@10 -- # set +x 00:22:21.008 01:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.008 01:30:12 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:21.008 01:30:12 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:22.384 0 00:22:22.384 01:30:13 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:22.384 01:30:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.384 01:30:13 -- common/autotest_common.sh@10 -- # set +x 00:22:22.385 01:30:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:22.385 01:30:13 -- host/multicontroller.sh@100 -- # killprocess 702283 00:22:22.385 01:30:13 -- common/autotest_common.sh@926 -- # '[' -z 702283 ']' 00:22:22.385 01:30:13 -- common/autotest_common.sh@930 -- # kill -0 702283 00:22:22.385 01:30:13 -- common/autotest_common.sh@931 -- # uname 00:22:22.385 01:30:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:22.385 01:30:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 702283 00:22:22.385 01:30:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:22.385 01:30:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:22.385 01:30:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 702283' 00:22:22.385 killing process with pid 702283 00:22:22.385 01:30:13 -- common/autotest_common.sh@945 -- # kill 702283 00:22:22.385 01:30:13 -- common/autotest_common.sh@950 -- # wait 702283 00:22:22.385 01:30:14 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:22.385 01:30:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.385 01:30:14 -- common/autotest_common.sh@10 -- # set +x 00:22:22.385 01:30:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:22.385 01:30:14 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:22.385 01:30:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.385 01:30:14 -- common/autotest_common.sh@10 -- # set +x 00:22:22.385 01:30:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:22.385 01:30:14 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:22.385 
01:30:14 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:22.385 01:30:14 -- common/autotest_common.sh@1597 -- # read -r file 00:22:22.385 01:30:14 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:22.642 01:30:14 -- common/autotest_common.sh@1596 -- # sort -u 00:22:22.642 01:30:14 -- common/autotest_common.sh@1598 -- # cat 00:22:22.642 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:22.642 [2024-07-27 01:30:11.159176] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:22.642 [2024-07-27 01:30:11.159275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702283 ] 00:22:22.642 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.642 [2024-07-27 01:30:11.219605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.642 [2024-07-27 01:30:11.325448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.642 [2024-07-27 01:30:12.665085] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 93be249f-92d6-4ef2-b9ee-6edad2c0f68d already exists 00:22:22.642 [2024-07-27 01:30:12.665147] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:93be249f-92d6-4ef2-b9ee-6edad2c0f68d alias for bdev NVMe1n1 00:22:22.642 [2024-07-27 01:30:12.665168] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:22.642 Running I/O for 1 seconds... 00:22:22.642 00:22:22.642 Latency(us) 00:22:22.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.642 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:22.642 NVMe0n1 : 1.00 19897.64 77.73 0.00 0.00 6416.86 3980.71 11505.21 00:22:22.642 =================================================================================================================== 00:22:22.642 Total : 19897.64 77.73 0.00 0.00 6416.86 3980.71 11505.21 00:22:22.642 Received shutdown signal, test time was about 1.000000 seconds 00:22:22.642 00:22:22.642 Latency(us) 00:22:22.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.642 =================================================================================================================== 00:22:22.642 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:22.642 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:22.642 01:30:14 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:22.642 01:30:14 -- common/autotest_common.sh@1597 -- # read -r file 00:22:22.642 01:30:14 -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:22.642 01:30:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:22.642 01:30:14 -- nvmf/common.sh@116 -- # sync 00:22:22.642 01:30:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:22.642 01:30:14 -- nvmf/common.sh@119 -- # set +e 00:22:22.642 01:30:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:22.642 01:30:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:22.642 rmmod nvme_tcp 00:22:22.642 rmmod nvme_fabrics 00:22:22.642 rmmod nvme_keyring 00:22:22.642 01:30:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:22.642 01:30:14 -- nvmf/common.sh@123 -- # set -e 
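The teardown above and just below is the same nvmftestfini sequence seen at the end of the shutdown tests: print the saved bdevperf output (pap), unload the NVMe/TCP host modules, kill the target, and undo the namespace plumbing. As a stand-alone sketch, assuming $nvmfpid holds the target PID (702124 in this run) and that remove_spdk_ns deletes the cvl_0_0_ns_spdk namespace (its body is not shown in this log):

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"
  while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done
  ip netns del cvl_0_0_ns_spdk      # assumed equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1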
00:22:22.642 01:30:14 -- nvmf/common.sh@124 -- # return 0 00:22:22.642 01:30:14 -- nvmf/common.sh@477 -- # '[' -n 702124 ']' 00:22:22.642 01:30:14 -- nvmf/common.sh@478 -- # killprocess 702124 00:22:22.642 01:30:14 -- common/autotest_common.sh@926 -- # '[' -z 702124 ']' 00:22:22.642 01:30:14 -- common/autotest_common.sh@930 -- # kill -0 702124 00:22:22.642 01:30:14 -- common/autotest_common.sh@931 -- # uname 00:22:22.642 01:30:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:22.642 01:30:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 702124 00:22:22.642 01:30:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:22.642 01:30:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:22.642 01:30:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 702124' 00:22:22.642 killing process with pid 702124 00:22:22.642 01:30:14 -- common/autotest_common.sh@945 -- # kill 702124 00:22:22.642 01:30:14 -- common/autotest_common.sh@950 -- # wait 702124 00:22:22.899 01:30:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:22.899 01:30:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:22.899 01:30:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:22.899 01:30:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:22.899 01:30:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:22.899 01:30:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.899 01:30:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.899 01:30:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.430 01:30:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:25.430 00:22:25.430 real 0m8.803s 00:22:25.430 user 0m16.989s 00:22:25.430 sys 0m2.322s 00:22:25.430 01:30:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:25.430 01:30:16 -- common/autotest_common.sh@10 -- # set +x 00:22:25.430 ************************************ 00:22:25.430 END TEST nvmf_multicontroller 00:22:25.430 ************************************ 00:22:25.430 01:30:16 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:25.430 01:30:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:25.430 01:30:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:25.430 01:30:16 -- common/autotest_common.sh@10 -- # set +x 00:22:25.430 ************************************ 00:22:25.430 START TEST nvmf_aer 00:22:25.430 ************************************ 00:22:25.431 01:30:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:25.431 * Looking for test storage... 
00:22:25.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:25.431 01:30:16 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.431 01:30:16 -- nvmf/common.sh@7 -- # uname -s 00:22:25.431 01:30:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.431 01:30:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.431 01:30:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.431 01:30:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.431 01:30:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.431 01:30:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.431 01:30:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.431 01:30:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.431 01:30:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.431 01:30:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.431 01:30:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.431 01:30:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.431 01:30:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.431 01:30:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.431 01:30:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.431 01:30:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.431 01:30:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.431 01:30:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.431 01:30:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.431 01:30:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.431 01:30:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.431 01:30:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.431 01:30:16 -- paths/export.sh@5 -- # export PATH 00:22:25.431 01:30:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.431 01:30:16 -- nvmf/common.sh@46 -- # : 0 00:22:25.431 01:30:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:25.431 01:30:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:25.431 01:30:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:25.431 01:30:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.431 01:30:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.431 01:30:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:25.431 01:30:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:25.431 01:30:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:25.431 01:30:16 -- host/aer.sh@11 -- # nvmftestinit 00:22:25.431 01:30:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:25.431 01:30:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.431 01:30:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:25.431 01:30:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:25.431 01:30:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:25.431 01:30:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.431 01:30:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.431 01:30:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.431 01:30:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:25.431 01:30:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:25.431 01:30:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:25.431 01:30:16 -- common/autotest_common.sh@10 -- # set +x 00:22:26.808 01:30:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:26.808 01:30:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:26.808 01:30:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:26.808 01:30:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:26.808 01:30:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:26.808 01:30:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:26.808 01:30:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:26.808 01:30:18 -- nvmf/common.sh@294 -- # net_devs=() 00:22:26.808 01:30:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:26.808 01:30:18 -- nvmf/common.sh@295 -- # e810=() 00:22:26.808 01:30:18 -- nvmf/common.sh@295 -- # local -ga e810 00:22:26.808 01:30:18 -- nvmf/common.sh@296 -- # x722=() 00:22:26.808 
01:30:18 -- nvmf/common.sh@296 -- # local -ga x722 00:22:26.808 01:30:18 -- nvmf/common.sh@297 -- # mlx=() 00:22:26.808 01:30:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:26.808 01:30:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.808 01:30:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.808 01:30:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.808 01:30:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.808 01:30:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.808 01:30:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.808 01:30:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.808 01:30:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.808 01:30:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.808 01:30:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.808 01:30:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.808 01:30:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:26.808 01:30:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:26.808 01:30:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:26.808 01:30:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:26.808 01:30:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:26.808 01:30:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:26.809 01:30:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:26.809 01:30:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:26.809 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:26.809 01:30:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:26.809 01:30:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:26.809 01:30:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.809 01:30:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.809 01:30:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:26.809 01:30:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:26.809 01:30:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:26.809 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:26.809 01:30:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:26.809 01:30:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:26.809 01:30:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.809 01:30:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.809 01:30:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:26.809 01:30:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:26.809 01:30:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:26.809 01:30:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:26.809 01:30:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:26.809 01:30:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.809 01:30:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:26.809 01:30:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.809 01:30:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:26.809 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:26.809 01:30:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.809 01:30:18 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:26.809 01:30:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.809 01:30:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:26.809 01:30:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.809 01:30:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:26.809 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:26.809 01:30:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.809 01:30:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:26.809 01:30:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:26.809 01:30:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:26.809 01:30:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:26.809 01:30:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:26.809 01:30:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.809 01:30:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.809 01:30:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.809 01:30:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:26.809 01:30:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.809 01:30:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.809 01:30:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:26.809 01:30:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.809 01:30:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.809 01:30:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:26.809 01:30:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:26.809 01:30:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.809 01:30:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.068 01:30:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.068 01:30:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.068 01:30:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:27.068 01:30:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.068 01:30:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.068 01:30:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.068 01:30:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:27.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:22:27.068 00:22:27.068 --- 10.0.0.2 ping statistics --- 00:22:27.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.068 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:27.068 01:30:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:27.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:22:27.068 00:22:27.068 --- 10.0.0.1 ping statistics --- 00:22:27.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.068 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:22:27.068 01:30:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.068 01:30:18 -- nvmf/common.sh@410 -- # return 0 00:22:27.068 01:30:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:27.068 01:30:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.068 01:30:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:27.068 01:30:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:27.068 01:30:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.068 01:30:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:27.068 01:30:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:27.068 01:30:18 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:27.068 01:30:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:27.068 01:30:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:27.068 01:30:18 -- common/autotest_common.sh@10 -- # set +x 00:22:27.068 01:30:18 -- nvmf/common.sh@469 -- # nvmfpid=704641 00:22:27.068 01:30:18 -- nvmf/common.sh@470 -- # waitforlisten 704641 00:22:27.068 01:30:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:27.068 01:30:18 -- common/autotest_common.sh@819 -- # '[' -z 704641 ']' 00:22:27.068 01:30:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.068 01:30:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:27.068 01:30:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.068 01:30:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:27.068 01:30:18 -- common/autotest_common.sh@10 -- # set +x 00:22:27.068 [2024-07-27 01:30:18.722985] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:27.068 [2024-07-27 01:30:18.723083] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.068 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.068 [2024-07-27 01:30:18.792367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.328 [2024-07-27 01:30:18.910360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:27.328 [2024-07-27 01:30:18.910516] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.328 [2024-07-27 01:30:18.910539] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.328 [2024-07-27 01:30:18.910553] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
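For the AER host test the target is relaunched the same way, this time on four cores (-m 0xF). Stripped of the waitforlisten helper, the start-up above amounts to the sketch below (binary path, core mask, and namespace are taken from the log; polling rpc_get_methods is a stand-in for the harness's own readiness loop, not its actual implementation):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                     # wait for /var/tmp/spdk.sock to come up
  done

Once the socket is up, aer.sh creates cnode1 with max_namespaces=2 and queries it with nvmf_get_subsystems, whose JSON output follows.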
00:22:27.328 [2024-07-27 01:30:18.910613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.328 [2024-07-27 01:30:18.910664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.328 [2024-07-27 01:30:18.910720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.328 [2024-07-27 01:30:18.910723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.264 01:30:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:28.264 01:30:19 -- common/autotest_common.sh@852 -- # return 0 00:22:28.264 01:30:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:28.264 01:30:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:28.264 01:30:19 -- common/autotest_common.sh@10 -- # set +x 00:22:28.264 01:30:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.264 01:30:19 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:28.264 01:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.264 01:30:19 -- common/autotest_common.sh@10 -- # set +x 00:22:28.264 [2024-07-27 01:30:19.683500] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.264 01:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.264 01:30:19 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:28.264 01:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.264 01:30:19 -- common/autotest_common.sh@10 -- # set +x 00:22:28.264 Malloc0 00:22:28.264 01:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.264 01:30:19 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:28.264 01:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.264 01:30:19 -- common/autotest_common.sh@10 -- # set +x 00:22:28.264 01:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.264 01:30:19 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:28.264 01:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.264 01:30:19 -- common/autotest_common.sh@10 -- # set +x 00:22:28.264 01:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.264 01:30:19 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.264 01:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.264 01:30:19 -- common/autotest_common.sh@10 -- # set +x 00:22:28.264 [2024-07-27 01:30:19.734522] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.264 01:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.264 01:30:19 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:28.264 01:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.264 01:30:19 -- common/autotest_common.sh@10 -- # set +x 00:22:28.264 [2024-07-27 01:30:19.742261] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:28.264 [ 00:22:28.264 { 00:22:28.264 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:28.264 "subtype": "Discovery", 00:22:28.264 "listen_addresses": [], 00:22:28.264 "allow_any_host": true, 00:22:28.264 "hosts": [] 00:22:28.264 }, 00:22:28.264 { 00:22:28.264 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:22:28.264 "subtype": "NVMe", 00:22:28.264 "listen_addresses": [ 00:22:28.264 { 00:22:28.264 "transport": "TCP", 00:22:28.264 "trtype": "TCP", 00:22:28.264 "adrfam": "IPv4", 00:22:28.264 "traddr": "10.0.0.2", 00:22:28.264 "trsvcid": "4420" 00:22:28.264 } 00:22:28.264 ], 00:22:28.264 "allow_any_host": true, 00:22:28.264 "hosts": [], 00:22:28.264 "serial_number": "SPDK00000000000001", 00:22:28.264 "model_number": "SPDK bdev Controller", 00:22:28.264 "max_namespaces": 2, 00:22:28.264 "min_cntlid": 1, 00:22:28.264 "max_cntlid": 65519, 00:22:28.264 "namespaces": [ 00:22:28.264 { 00:22:28.264 "nsid": 1, 00:22:28.264 "bdev_name": "Malloc0", 00:22:28.264 "name": "Malloc0", 00:22:28.264 "nguid": "D6468B715B83419BBEFD560B38BCD49D", 00:22:28.264 "uuid": "d6468b71-5b83-419b-befd-560b38bcd49d" 00:22:28.264 } 00:22:28.264 ] 00:22:28.264 } 00:22:28.264 ] 00:22:28.264 01:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.264 01:30:19 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:28.264 01:30:19 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:28.264 01:30:19 -- host/aer.sh@33 -- # aerpid=704800 00:22:28.264 01:30:19 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:28.264 01:30:19 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:28.264 01:30:19 -- common/autotest_common.sh@1244 -- # local i=0 00:22:28.264 01:30:19 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:28.264 01:30:19 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:22:28.264 01:30:19 -- common/autotest_common.sh@1247 -- # i=1 00:22:28.264 01:30:19 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:22:28.264 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.264 01:30:19 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:28.264 01:30:19 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:22:28.264 01:30:19 -- common/autotest_common.sh@1247 -- # i=2 00:22:28.264 01:30:19 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:22:28.264 01:30:19 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:28.264 01:30:19 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:28.264 01:30:19 -- common/autotest_common.sh@1255 -- # return 0 00:22:28.264 01:30:19 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:28.264 01:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.264 01:30:19 -- common/autotest_common.sh@10 -- # set +x 00:22:28.264 Malloc1 00:22:28.264 01:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.264 01:30:19 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:28.264 01:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.264 01:30:19 -- common/autotest_common.sh@10 -- # set +x 00:22:28.264 01:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.264 01:30:20 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:28.264 01:30:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.264 01:30:20 -- common/autotest_common.sh@10 -- # set +x 00:22:28.264 [ 00:22:28.264 { 00:22:28.525 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:28.525 "subtype": "Discovery", 00:22:28.525 "listen_addresses": [], 00:22:28.525 "allow_any_host": true, 00:22:28.525 "hosts": [] 00:22:28.525 }, 00:22:28.525 { 00:22:28.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.525 "subtype": "NVMe", 00:22:28.525 "listen_addresses": [ 00:22:28.525 { 00:22:28.525 "transport": "TCP", 00:22:28.525 "trtype": "TCP", 00:22:28.525 "adrfam": "IPv4", 00:22:28.525 "traddr": "10.0.0.2", 00:22:28.525 "trsvcid": "4420" 00:22:28.525 } 00:22:28.525 ], 00:22:28.525 "allow_any_host": true, 00:22:28.525 "hosts": [], 00:22:28.525 "serial_number": "SPDK00000000000001", 00:22:28.525 "model_number": "SPDK bdev Controller", 00:22:28.525 "max_namespaces": 2, 00:22:28.525 "min_cntlid": 1, 00:22:28.525 "max_cntlid": 65519, 00:22:28.525 "namespaces": [ 00:22:28.525 { 00:22:28.525 "nsid": 1, 00:22:28.525 "bdev_name": "Malloc0", 00:22:28.525 "name": "Malloc0", 00:22:28.525 "nguid": "D6468B715B83419BBEFD560B38BCD49D", 00:22:28.525 "uuid": "d6468b71-5b83-419b-befd-560b38bcd49d" 00:22:28.525 }, 00:22:28.525 { 00:22:28.525 "nsid": 2, 00:22:28.525 "bdev_name": "Malloc1", 00:22:28.525 "name": "Malloc1", 00:22:28.525 "nguid": "BF7F129E8C49424A9D865310D76731D8", 00:22:28.525 "uuid": "bf7f129e-8c49-424a-9d86-5310d76731d8" 00:22:28.525 } 00:22:28.525 ] 00:22:28.525 } 00:22:28.525 ] 00:22:28.525 01:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.525 01:30:20 -- host/aer.sh@43 -- # wait 704800 00:22:28.525 Asynchronous Event Request test 00:22:28.525 Attaching to 10.0.0.2 00:22:28.525 Attached to 10.0.0.2 00:22:28.525 Registering asynchronous event callbacks... 00:22:28.525 Starting namespace attribute notice tests for all controllers... 00:22:28.525 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:28.525 aer_cb - Changed Namespace 00:22:28.525 Cleaning up... 
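The aer.sh flow above boils down to a short RPC sequence; a hedged sketch, using the harness's rpc_cmd wrapper (roughly a front-end for scripts/rpc.py) and the same arguments seen in the trace, of how the namespace-attribute event was provoked:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, flags as used by the harness
  rpc_cmd bdev_malloc_create 64 512 --name Malloc0                    # 64 MB bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # test/nvme/aer/aer connects and registers AER callbacks, then a second namespace is added:
  rpc_cmd bdev_malloc_create 64 4096 --name Malloc1                   # 64 MB bdev, 4096-byte blocks
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

Adding Malloc1 while the controller is attached is what produces the asynchronous event reported above (log page 0x04, the Changed Namespace List, hence "aer_cb - Changed Namespace").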
00:22:28.525 01:30:20 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:28.525 01:30:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.525 01:30:20 -- common/autotest_common.sh@10 -- # set +x 00:22:28.525 01:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.525 01:30:20 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:28.525 01:30:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.525 01:30:20 -- common/autotest_common.sh@10 -- # set +x 00:22:28.525 01:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.525 01:30:20 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:28.525 01:30:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.525 01:30:20 -- common/autotest_common.sh@10 -- # set +x 00:22:28.525 01:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.525 01:30:20 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:28.525 01:30:20 -- host/aer.sh@51 -- # nvmftestfini 00:22:28.525 01:30:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:28.525 01:30:20 -- nvmf/common.sh@116 -- # sync 00:22:28.525 01:30:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:28.525 01:30:20 -- nvmf/common.sh@119 -- # set +e 00:22:28.525 01:30:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:28.525 01:30:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:28.525 rmmod nvme_tcp 00:22:28.525 rmmod nvme_fabrics 00:22:28.525 rmmod nvme_keyring 00:22:28.525 01:30:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:28.525 01:30:20 -- nvmf/common.sh@123 -- # set -e 00:22:28.525 01:30:20 -- nvmf/common.sh@124 -- # return 0 00:22:28.525 01:30:20 -- nvmf/common.sh@477 -- # '[' -n 704641 ']' 00:22:28.525 01:30:20 -- nvmf/common.sh@478 -- # killprocess 704641 00:22:28.525 01:30:20 -- common/autotest_common.sh@926 -- # '[' -z 704641 ']' 00:22:28.525 01:30:20 -- common/autotest_common.sh@930 -- # kill -0 704641 00:22:28.525 01:30:20 -- common/autotest_common.sh@931 -- # uname 00:22:28.525 01:30:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:28.525 01:30:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 704641 00:22:28.525 01:30:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:28.525 01:30:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:28.525 01:30:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 704641' 00:22:28.525 killing process with pid 704641 00:22:28.525 01:30:20 -- common/autotest_common.sh@945 -- # kill 704641 00:22:28.525 [2024-07-27 01:30:20.192150] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:28.525 01:30:20 -- common/autotest_common.sh@950 -- # wait 704641 00:22:28.784 01:30:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:28.784 01:30:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:28.784 01:30:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:28.784 01:30:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.784 01:30:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:28.784 01:30:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.784 01:30:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.784 01:30:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.356 01:30:22 -- 
nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:31.356 00:22:31.356 real 0m5.917s 00:22:31.356 user 0m6.921s 00:22:31.356 sys 0m1.792s 00:22:31.356 01:30:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:31.356 01:30:22 -- common/autotest_common.sh@10 -- # set +x 00:22:31.356 ************************************ 00:22:31.356 END TEST nvmf_aer 00:22:31.356 ************************************ 00:22:31.356 01:30:22 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:31.356 01:30:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:31.356 01:30:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:31.356 01:30:22 -- common/autotest_common.sh@10 -- # set +x 00:22:31.356 ************************************ 00:22:31.356 START TEST nvmf_async_init 00:22:31.356 ************************************ 00:22:31.356 01:30:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:31.356 * Looking for test storage... 00:22:31.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:31.356 01:30:22 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.356 01:30:22 -- nvmf/common.sh@7 -- # uname -s 00:22:31.356 01:30:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.356 01:30:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.356 01:30:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.356 01:30:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.356 01:30:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.356 01:30:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.356 01:30:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.356 01:30:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.356 01:30:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.356 01:30:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.356 01:30:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.356 01:30:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.356 01:30:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.356 01:30:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.356 01:30:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.356 01:30:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.356 01:30:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.356 01:30:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.356 01:30:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.356 01:30:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.356 01:30:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.356 01:30:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.356 01:30:22 -- paths/export.sh@5 -- # export PATH 00:22:31.356 01:30:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.356 01:30:22 -- nvmf/common.sh@46 -- # : 0 00:22:31.356 01:30:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:31.356 01:30:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:31.356 01:30:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:31.356 01:30:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.356 01:30:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.356 01:30:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:31.356 01:30:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:31.356 01:30:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:31.356 01:30:22 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:31.356 01:30:22 -- host/async_init.sh@14 -- # null_block_size=512 00:22:31.356 01:30:22 -- host/async_init.sh@15 -- # null_bdev=null0 00:22:31.356 01:30:22 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:31.356 01:30:22 -- host/async_init.sh@20 -- # uuidgen 00:22:31.356 01:30:22 -- host/async_init.sh@20 -- # tr -d - 00:22:31.356 01:30:22 -- host/async_init.sh@20 -- # nguid=07ba1460a44c402a9e1dd537612d9fc1 00:22:31.356 01:30:22 -- host/async_init.sh@22 -- # nvmftestinit 00:22:31.356 01:30:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 
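The nguid line above is the core of this test's identity check: a freshly generated UUID is stripped of its dashes so it can be handed to the target as the namespace GUID, and the same value is expected to resurface (dashed again) as the uuid of the attached nvme0n1 bdev in the bdev_get_bdevs dumps further down. A sketch of that plumbing, collecting the rpc_cmd calls run later in this test (sketch only; the nvmftestinit/nvmfappstart steps in between are omitted):

  nguid=$(uuidgen | tr -d -)                           # e.g. 07ba1460a44c402a9e1dd537612d9fc1
  rpc_cmd bdev_null_create null0 1024 512              # 1024 MB null bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  rpc_cmd bdev_get_bdevs -b nvme0n1                    # reports the dashed UUID derived from $nguid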
00:22:31.356 01:30:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.356 01:30:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:31.356 01:30:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:31.356 01:30:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:31.356 01:30:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.356 01:30:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.356 01:30:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.356 01:30:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:31.356 01:30:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:31.356 01:30:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:31.356 01:30:22 -- common/autotest_common.sh@10 -- # set +x 00:22:33.267 01:30:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:33.267 01:30:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:33.267 01:30:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:33.267 01:30:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:33.267 01:30:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:33.267 01:30:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:33.267 01:30:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:33.267 01:30:24 -- nvmf/common.sh@294 -- # net_devs=() 00:22:33.267 01:30:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:33.267 01:30:24 -- nvmf/common.sh@295 -- # e810=() 00:22:33.267 01:30:24 -- nvmf/common.sh@295 -- # local -ga e810 00:22:33.267 01:30:24 -- nvmf/common.sh@296 -- # x722=() 00:22:33.267 01:30:24 -- nvmf/common.sh@296 -- # local -ga x722 00:22:33.267 01:30:24 -- nvmf/common.sh@297 -- # mlx=() 00:22:33.267 01:30:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:33.267 01:30:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.267 01:30:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.267 01:30:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.267 01:30:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.267 01:30:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.267 01:30:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.267 01:30:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.267 01:30:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.267 01:30:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.267 01:30:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.267 01:30:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.267 01:30:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:33.267 01:30:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:33.267 01:30:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:33.267 01:30:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:33.267 01:30:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:33.267 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:33.267 01:30:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:33.267 01:30:24 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:33.267 01:30:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:33.267 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:33.267 01:30:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:33.267 01:30:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:33.267 01:30:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.267 01:30:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:33.267 01:30:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.267 01:30:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:33.267 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:33.267 01:30:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.267 01:30:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:33.267 01:30:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.267 01:30:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:33.267 01:30:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.267 01:30:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:33.267 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:33.267 01:30:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.267 01:30:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:33.267 01:30:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:33.267 01:30:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:33.267 01:30:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.267 01:30:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.267 01:30:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.267 01:30:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:33.267 01:30:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.267 01:30:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.267 01:30:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:33.267 01:30:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.267 01:30:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.267 01:30:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:33.267 01:30:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:33.267 01:30:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.267 01:30:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:22:33.267 01:30:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.267 01:30:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.267 01:30:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:33.267 01:30:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.267 01:30:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.267 01:30:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.267 01:30:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:33.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:22:33.267 00:22:33.267 --- 10.0.0.2 ping statistics --- 00:22:33.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.267 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:33.267 01:30:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:22:33.267 00:22:33.267 --- 10.0.0.1 ping statistics --- 00:22:33.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.267 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:22:33.267 01:30:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.267 01:30:24 -- nvmf/common.sh@410 -- # return 0 00:22:33.267 01:30:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:33.267 01:30:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.267 01:30:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:33.267 01:30:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.267 01:30:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:33.267 01:30:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:33.267 01:30:24 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:33.267 01:30:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:33.267 01:30:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:33.267 01:30:24 -- common/autotest_common.sh@10 -- # set +x 00:22:33.267 01:30:24 -- nvmf/common.sh@469 -- # nvmfpid=706756 00:22:33.267 01:30:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:33.267 01:30:24 -- nvmf/common.sh@470 -- # waitforlisten 706756 00:22:33.267 01:30:24 -- common/autotest_common.sh@819 -- # '[' -z 706756 ']' 00:22:33.267 01:30:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.267 01:30:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:33.267 01:30:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.268 01:30:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:33.268 01:30:24 -- common/autotest_common.sh@10 -- # set +x 00:22:33.268 [2024-07-27 01:30:24.731954] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
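nvmfappstart, whose output begins here, is the harness helper that launches the target inside the namespace built above and blocks until its RPC socket answers. A rough, hedged equivalent of what it does for this test (the nvmfpid=$! bookkeeping and the exact wait loop are reconstructions; only the command line, the pid, and the spdk.sock wait are visible in the trace):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # full workspace path in the trace
  nvmfpid=$!                      # 706756 in this run
  waitforlisten "$nvmfpid"        # waits for /var/tmp/spdk.sock, per the message above
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

The -m 0x1 core mask (versus 0xF in the earlier aer run) keeps this target on a single reactor, which is why only core 0 is reported below.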
00:22:33.268 [2024-07-27 01:30:24.732036] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.268 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.268 [2024-07-27 01:30:24.794962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.268 [2024-07-27 01:30:24.898918] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:33.268 [2024-07-27 01:30:24.899101] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.268 [2024-07-27 01:30:24.899121] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.268 [2024-07-27 01:30:24.899134] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.268 [2024-07-27 01:30:24.899162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.201 01:30:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:34.201 01:30:25 -- common/autotest_common.sh@852 -- # return 0 00:22:34.201 01:30:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:34.201 01:30:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:34.201 01:30:25 -- common/autotest_common.sh@10 -- # set +x 00:22:34.201 01:30:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.201 01:30:25 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:34.201 01:30:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.201 01:30:25 -- common/autotest_common.sh@10 -- # set +x 00:22:34.201 [2024-07-27 01:30:25.742072] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.201 01:30:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.201 01:30:25 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:34.201 01:30:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.201 01:30:25 -- common/autotest_common.sh@10 -- # set +x 00:22:34.201 null0 00:22:34.201 01:30:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.201 01:30:25 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:34.201 01:30:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.201 01:30:25 -- common/autotest_common.sh@10 -- # set +x 00:22:34.201 01:30:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.201 01:30:25 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:34.201 01:30:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.201 01:30:25 -- common/autotest_common.sh@10 -- # set +x 00:22:34.201 01:30:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.201 01:30:25 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 07ba1460a44c402a9e1dd537612d9fc1 00:22:34.201 01:30:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.201 01:30:25 -- common/autotest_common.sh@10 -- # set +x 00:22:34.201 01:30:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.201 01:30:25 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:34.201 01:30:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.201 01:30:25 -- 
common/autotest_common.sh@10 -- # set +x 00:22:34.201 [2024-07-27 01:30:25.782327] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.201 01:30:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.201 01:30:25 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:34.201 01:30:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.201 01:30:25 -- common/autotest_common.sh@10 -- # set +x 00:22:34.460 nvme0n1 00:22:34.460 01:30:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.460 01:30:26 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:34.460 01:30:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.460 01:30:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.460 [ 00:22:34.460 { 00:22:34.460 "name": "nvme0n1", 00:22:34.460 "aliases": [ 00:22:34.460 "07ba1460-a44c-402a-9e1d-d537612d9fc1" 00:22:34.460 ], 00:22:34.460 "product_name": "NVMe disk", 00:22:34.460 "block_size": 512, 00:22:34.460 "num_blocks": 2097152, 00:22:34.460 "uuid": "07ba1460-a44c-402a-9e1d-d537612d9fc1", 00:22:34.460 "assigned_rate_limits": { 00:22:34.460 "rw_ios_per_sec": 0, 00:22:34.460 "rw_mbytes_per_sec": 0, 00:22:34.460 "r_mbytes_per_sec": 0, 00:22:34.460 "w_mbytes_per_sec": 0 00:22:34.460 }, 00:22:34.460 "claimed": false, 00:22:34.460 "zoned": false, 00:22:34.460 "supported_io_types": { 00:22:34.460 "read": true, 00:22:34.460 "write": true, 00:22:34.460 "unmap": false, 00:22:34.460 "write_zeroes": true, 00:22:34.460 "flush": true, 00:22:34.460 "reset": true, 00:22:34.460 "compare": true, 00:22:34.460 "compare_and_write": true, 00:22:34.460 "abort": true, 00:22:34.460 "nvme_admin": true, 00:22:34.460 "nvme_io": true 00:22:34.460 }, 00:22:34.460 "driver_specific": { 00:22:34.460 "nvme": [ 00:22:34.460 { 00:22:34.460 "trid": { 00:22:34.460 "trtype": "TCP", 00:22:34.460 "adrfam": "IPv4", 00:22:34.460 "traddr": "10.0.0.2", 00:22:34.460 "trsvcid": "4420", 00:22:34.460 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:34.460 }, 00:22:34.460 "ctrlr_data": { 00:22:34.460 "cntlid": 1, 00:22:34.460 "vendor_id": "0x8086", 00:22:34.460 "model_number": "SPDK bdev Controller", 00:22:34.460 "serial_number": "00000000000000000000", 00:22:34.460 "firmware_revision": "24.01.1", 00:22:34.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:34.460 "oacs": { 00:22:34.460 "security": 0, 00:22:34.460 "format": 0, 00:22:34.460 "firmware": 0, 00:22:34.460 "ns_manage": 0 00:22:34.460 }, 00:22:34.460 "multi_ctrlr": true, 00:22:34.460 "ana_reporting": false 00:22:34.460 }, 00:22:34.460 "vs": { 00:22:34.460 "nvme_version": "1.3" 00:22:34.460 }, 00:22:34.460 "ns_data": { 00:22:34.460 "id": 1, 00:22:34.460 "can_share": true 00:22:34.460 } 00:22:34.460 } 00:22:34.460 ], 00:22:34.460 "mp_policy": "active_passive" 00:22:34.460 } 00:22:34.460 } 00:22:34.460 ] 00:22:34.460 01:30:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.460 01:30:26 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:34.460 01:30:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.460 01:30:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.460 [2024-07-27 01:30:26.031027] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:34.460 [2024-07-27 01:30:26.031128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d7a80 (9): Bad file 
descriptor 00:22:34.460 [2024-07-27 01:30:26.163218] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:34.460 01:30:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.460 01:30:26 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:34.460 01:30:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.460 01:30:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.460 [ 00:22:34.460 { 00:22:34.460 "name": "nvme0n1", 00:22:34.460 "aliases": [ 00:22:34.460 "07ba1460-a44c-402a-9e1d-d537612d9fc1" 00:22:34.460 ], 00:22:34.460 "product_name": "NVMe disk", 00:22:34.460 "block_size": 512, 00:22:34.460 "num_blocks": 2097152, 00:22:34.460 "uuid": "07ba1460-a44c-402a-9e1d-d537612d9fc1", 00:22:34.460 "assigned_rate_limits": { 00:22:34.460 "rw_ios_per_sec": 0, 00:22:34.460 "rw_mbytes_per_sec": 0, 00:22:34.460 "r_mbytes_per_sec": 0, 00:22:34.460 "w_mbytes_per_sec": 0 00:22:34.460 }, 00:22:34.460 "claimed": false, 00:22:34.460 "zoned": false, 00:22:34.460 "supported_io_types": { 00:22:34.460 "read": true, 00:22:34.460 "write": true, 00:22:34.460 "unmap": false, 00:22:34.460 "write_zeroes": true, 00:22:34.460 "flush": true, 00:22:34.460 "reset": true, 00:22:34.460 "compare": true, 00:22:34.460 "compare_and_write": true, 00:22:34.460 "abort": true, 00:22:34.460 "nvme_admin": true, 00:22:34.460 "nvme_io": true 00:22:34.460 }, 00:22:34.460 "driver_specific": { 00:22:34.460 "nvme": [ 00:22:34.460 { 00:22:34.460 "trid": { 00:22:34.460 "trtype": "TCP", 00:22:34.460 "adrfam": "IPv4", 00:22:34.460 "traddr": "10.0.0.2", 00:22:34.460 "trsvcid": "4420", 00:22:34.460 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:34.460 }, 00:22:34.460 "ctrlr_data": { 00:22:34.460 "cntlid": 2, 00:22:34.460 "vendor_id": "0x8086", 00:22:34.460 "model_number": "SPDK bdev Controller", 00:22:34.460 "serial_number": "00000000000000000000", 00:22:34.460 "firmware_revision": "24.01.1", 00:22:34.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:34.460 "oacs": { 00:22:34.460 "security": 0, 00:22:34.460 "format": 0, 00:22:34.460 "firmware": 0, 00:22:34.460 "ns_manage": 0 00:22:34.460 }, 00:22:34.460 "multi_ctrlr": true, 00:22:34.460 "ana_reporting": false 00:22:34.460 }, 00:22:34.460 "vs": { 00:22:34.460 "nvme_version": "1.3" 00:22:34.460 }, 00:22:34.460 "ns_data": { 00:22:34.460 "id": 1, 00:22:34.460 "can_share": true 00:22:34.460 } 00:22:34.460 } 00:22:34.460 ], 00:22:34.460 "mp_policy": "active_passive" 00:22:34.460 } 00:22:34.460 } 00:22:34.460 ] 00:22:34.460 01:30:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.460 01:30:26 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.460 01:30:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.460 01:30:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.460 01:30:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.460 01:30:26 -- host/async_init.sh@53 -- # mktemp 00:22:34.460 01:30:26 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.M1Wq2AZZUo 00:22:34.460 01:30:26 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:34.460 01:30:26 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.M1Wq2AZZUo 00:22:34.460 01:30:26 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:34.460 01:30:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.460 01:30:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.460 01:30:26 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.460 01:30:26 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:34.460 01:30:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.460 01:30:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.460 [2024-07-27 01:30:26.207659] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:34.460 [2024-07-27 01:30:26.207789] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:34.460 01:30:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.460 01:30:26 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.M1Wq2AZZUo 00:22:34.460 01:30:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.460 01:30:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.718 01:30:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.718 01:30:26 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.M1Wq2AZZUo 00:22:34.718 01:30:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.718 01:30:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.718 [2024-07-27 01:30:26.223698] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:34.718 nvme0n1 00:22:34.718 01:30:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.718 01:30:26 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:34.718 01:30:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.718 01:30:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.718 [ 00:22:34.718 { 00:22:34.718 "name": "nvme0n1", 00:22:34.718 "aliases": [ 00:22:34.718 "07ba1460-a44c-402a-9e1d-d537612d9fc1" 00:22:34.718 ], 00:22:34.718 "product_name": "NVMe disk", 00:22:34.718 "block_size": 512, 00:22:34.718 "num_blocks": 2097152, 00:22:34.718 "uuid": "07ba1460-a44c-402a-9e1d-d537612d9fc1", 00:22:34.718 "assigned_rate_limits": { 00:22:34.718 "rw_ios_per_sec": 0, 00:22:34.718 "rw_mbytes_per_sec": 0, 00:22:34.718 "r_mbytes_per_sec": 0, 00:22:34.718 "w_mbytes_per_sec": 0 00:22:34.718 }, 00:22:34.718 "claimed": false, 00:22:34.718 "zoned": false, 00:22:34.718 "supported_io_types": { 00:22:34.718 "read": true, 00:22:34.718 "write": true, 00:22:34.718 "unmap": false, 00:22:34.718 "write_zeroes": true, 00:22:34.718 "flush": true, 00:22:34.718 "reset": true, 00:22:34.718 "compare": true, 00:22:34.718 "compare_and_write": true, 00:22:34.718 "abort": true, 00:22:34.718 "nvme_admin": true, 00:22:34.718 "nvme_io": true 00:22:34.718 }, 00:22:34.718 "driver_specific": { 00:22:34.718 "nvme": [ 00:22:34.718 { 00:22:34.718 "trid": { 00:22:34.718 "trtype": "TCP", 00:22:34.718 "adrfam": "IPv4", 00:22:34.718 "traddr": "10.0.0.2", 00:22:34.718 "trsvcid": "4421", 00:22:34.718 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:34.718 }, 00:22:34.718 "ctrlr_data": { 00:22:34.718 "cntlid": 3, 00:22:34.718 "vendor_id": "0x8086", 00:22:34.718 "model_number": "SPDK bdev Controller", 00:22:34.718 "serial_number": "00000000000000000000", 00:22:34.718 "firmware_revision": "24.01.1", 00:22:34.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:34.718 "oacs": { 00:22:34.718 "security": 0, 00:22:34.718 "format": 0, 00:22:34.718 "firmware": 0, 00:22:34.718 
"ns_manage": 0 00:22:34.718 }, 00:22:34.718 "multi_ctrlr": true, 00:22:34.718 "ana_reporting": false 00:22:34.718 }, 00:22:34.718 "vs": { 00:22:34.718 "nvme_version": "1.3" 00:22:34.718 }, 00:22:34.718 "ns_data": { 00:22:34.718 "id": 1, 00:22:34.718 "can_share": true 00:22:34.718 } 00:22:34.718 } 00:22:34.718 ], 00:22:34.718 "mp_policy": "active_passive" 00:22:34.718 } 00:22:34.718 } 00:22:34.718 ] 00:22:34.718 01:30:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.718 01:30:26 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.718 01:30:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.718 01:30:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.719 01:30:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.719 01:30:26 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.M1Wq2AZZUo 00:22:34.719 01:30:26 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:34.719 01:30:26 -- host/async_init.sh@78 -- # nvmftestfini 00:22:34.719 01:30:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:34.719 01:30:26 -- nvmf/common.sh@116 -- # sync 00:22:34.719 01:30:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:34.719 01:30:26 -- nvmf/common.sh@119 -- # set +e 00:22:34.719 01:30:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:34.719 01:30:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:34.719 rmmod nvme_tcp 00:22:34.719 rmmod nvme_fabrics 00:22:34.719 rmmod nvme_keyring 00:22:34.719 01:30:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:34.719 01:30:26 -- nvmf/common.sh@123 -- # set -e 00:22:34.719 01:30:26 -- nvmf/common.sh@124 -- # return 0 00:22:34.719 01:30:26 -- nvmf/common.sh@477 -- # '[' -n 706756 ']' 00:22:34.719 01:30:26 -- nvmf/common.sh@478 -- # killprocess 706756 00:22:34.719 01:30:26 -- common/autotest_common.sh@926 -- # '[' -z 706756 ']' 00:22:34.719 01:30:26 -- common/autotest_common.sh@930 -- # kill -0 706756 00:22:34.719 01:30:26 -- common/autotest_common.sh@931 -- # uname 00:22:34.719 01:30:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:34.719 01:30:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 706756 00:22:34.719 01:30:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:34.719 01:30:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:34.719 01:30:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 706756' 00:22:34.719 killing process with pid 706756 00:22:34.719 01:30:26 -- common/autotest_common.sh@945 -- # kill 706756 00:22:34.719 01:30:26 -- common/autotest_common.sh@950 -- # wait 706756 00:22:34.976 01:30:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:34.976 01:30:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:34.976 01:30:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:34.976 01:30:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.976 01:30:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:34.976 01:30:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.976 01:30:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.976 01:30:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.510 01:30:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:37.510 00:22:37.510 real 0m6.131s 00:22:37.510 user 0m2.989s 00:22:37.510 sys 0m1.770s 00:22:37.510 01:30:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.510 01:30:28 -- 
common/autotest_common.sh@10 -- # set +x 00:22:37.510 ************************************ 00:22:37.510 END TEST nvmf_async_init 00:22:37.510 ************************************ 00:22:37.510 01:30:28 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:37.510 01:30:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:37.510 01:30:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:37.510 01:30:28 -- common/autotest_common.sh@10 -- # set +x 00:22:37.510 ************************************ 00:22:37.510 START TEST dma 00:22:37.510 ************************************ 00:22:37.510 01:30:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:37.510 * Looking for test storage... 00:22:37.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:37.510 01:30:28 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.510 01:30:28 -- nvmf/common.sh@7 -- # uname -s 00:22:37.510 01:30:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.510 01:30:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.510 01:30:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.510 01:30:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.510 01:30:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.510 01:30:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.510 01:30:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.510 01:30:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.510 01:30:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.510 01:30:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.510 01:30:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.510 01:30:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.510 01:30:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.510 01:30:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.510 01:30:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.510 01:30:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.510 01:30:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.510 01:30:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.510 01:30:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.511 01:30:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.511 01:30:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.511 01:30:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.511 01:30:28 -- paths/export.sh@5 -- # export PATH 00:22:37.511 01:30:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.511 01:30:28 -- nvmf/common.sh@46 -- # : 0 00:22:37.511 01:30:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:37.511 01:30:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:37.511 01:30:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:37.511 01:30:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.511 01:30:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.511 01:30:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:37.511 01:30:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:37.511 01:30:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:37.511 01:30:28 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:37.511 01:30:28 -- host/dma.sh@13 -- # exit 0 00:22:37.511 00:22:37.511 real 0m0.067s 00:22:37.511 user 0m0.025s 00:22:37.511 sys 0m0.047s 00:22:37.511 01:30:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.511 01:30:28 -- common/autotest_common.sh@10 -- # set +x 00:22:37.511 ************************************ 00:22:37.511 END TEST dma 00:22:37.511 ************************************ 00:22:37.511 01:30:28 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:37.511 01:30:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:37.511 01:30:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:37.511 01:30:28 -- common/autotest_common.sh@10 -- # set +x 00:22:37.511 ************************************ 00:22:37.511 START TEST nvmf_identify 00:22:37.511 ************************************ 00:22:37.511 01:30:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:37.511 * Looking for 
test storage... 00:22:37.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:37.511 01:30:28 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.511 01:30:28 -- nvmf/common.sh@7 -- # uname -s 00:22:37.511 01:30:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.511 01:30:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.511 01:30:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.511 01:30:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.511 01:30:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.511 01:30:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.511 01:30:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.511 01:30:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.511 01:30:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.511 01:30:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.511 01:30:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.511 01:30:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.511 01:30:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.511 01:30:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.511 01:30:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.511 01:30:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.511 01:30:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.511 01:30:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.511 01:30:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.511 01:30:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.511 01:30:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.511 01:30:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.511 01:30:28 -- paths/export.sh@5 -- # export PATH 00:22:37.511 01:30:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.511 01:30:28 -- nvmf/common.sh@46 -- # : 0 00:22:37.511 01:30:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:37.511 01:30:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:37.511 01:30:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:37.511 01:30:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.511 01:30:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.511 01:30:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:37.511 01:30:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:37.511 01:30:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:37.511 01:30:28 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.511 01:30:28 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.511 01:30:28 -- host/identify.sh@14 -- # nvmftestinit 00:22:37.511 01:30:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:37.511 01:30:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.511 01:30:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:37.511 01:30:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:37.511 01:30:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:37.511 01:30:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.511 01:30:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.511 01:30:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.511 01:30:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:37.511 01:30:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:37.511 01:30:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:37.511 01:30:28 -- common/autotest_common.sh@10 -- # set +x 00:22:39.412 01:30:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:39.412 01:30:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:39.412 01:30:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:39.412 01:30:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:39.412 01:30:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:39.412 01:30:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:39.412 01:30:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:39.412 01:30:30 -- nvmf/common.sh@294 -- # net_devs=() 00:22:39.412 01:30:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:39.412 01:30:30 -- nvmf/common.sh@295 
-- # e810=() 00:22:39.412 01:30:30 -- nvmf/common.sh@295 -- # local -ga e810 00:22:39.412 01:30:30 -- nvmf/common.sh@296 -- # x722=() 00:22:39.412 01:30:30 -- nvmf/common.sh@296 -- # local -ga x722 00:22:39.412 01:30:30 -- nvmf/common.sh@297 -- # mlx=() 00:22:39.412 01:30:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:39.412 01:30:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.412 01:30:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.412 01:30:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.412 01:30:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.412 01:30:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.412 01:30:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.412 01:30:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.412 01:30:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.412 01:30:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.412 01:30:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.412 01:30:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.412 01:30:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:39.412 01:30:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:39.412 01:30:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:39.412 01:30:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:39.412 01:30:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:39.412 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:39.412 01:30:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:39.412 01:30:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:39.412 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:39.412 01:30:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:39.412 01:30:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:39.412 01:30:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.412 01:30:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:39.412 01:30:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.412 01:30:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:39.412 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:22:39.412 01:30:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.412 01:30:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:39.412 01:30:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.412 01:30:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:39.412 01:30:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.412 01:30:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:39.412 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:39.412 01:30:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.412 01:30:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:39.412 01:30:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:39.412 01:30:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:39.412 01:30:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.412 01:30:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.412 01:30:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.412 01:30:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:39.412 01:30:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.412 01:30:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.412 01:30:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:39.412 01:30:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.412 01:30:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.412 01:30:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:39.412 01:30:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:39.412 01:30:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.412 01:30:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.412 01:30:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.412 01:30:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.412 01:30:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:39.412 01:30:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.412 01:30:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.412 01:30:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.412 01:30:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:39.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:22:39.412 00:22:39.412 --- 10.0.0.2 ping statistics --- 00:22:39.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.412 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:22:39.412 01:30:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:22:39.412 00:22:39.412 --- 10.0.0.1 ping statistics --- 00:22:39.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.412 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:22:39.412 01:30:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.412 01:30:30 -- nvmf/common.sh@410 -- # return 0 00:22:39.412 01:30:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:39.412 01:30:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.412 01:30:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:39.412 01:30:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.412 01:30:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:39.412 01:30:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:39.412 01:30:30 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:39.412 01:30:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:39.412 01:30:30 -- common/autotest_common.sh@10 -- # set +x 00:22:39.412 01:30:30 -- host/identify.sh@19 -- # nvmfpid=708903 00:22:39.412 01:30:30 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:39.412 01:30:30 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:39.412 01:30:30 -- host/identify.sh@23 -- # waitforlisten 708903 00:22:39.412 01:30:30 -- common/autotest_common.sh@819 -- # '[' -z 708903 ']' 00:22:39.412 01:30:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.412 01:30:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:39.412 01:30:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.412 01:30:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:39.412 01:30:30 -- common/autotest_common.sh@10 -- # set +x 00:22:39.412 [2024-07-27 01:30:30.863399] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:39.412 [2024-07-27 01:30:30.863480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.412 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.412 [2024-07-27 01:30:30.933070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.412 [2024-07-27 01:30:31.051393] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:39.412 [2024-07-27 01:30:31.051561] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.412 [2024-07-27 01:30:31.051582] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.412 [2024-07-27 01:30:31.051599] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
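The nvmf_tcp_init trace above builds the test topology for this run: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 (target side), the second port (cvl_0_1) stays in the root namespace as 10.0.0.1 (initiator side), TCP port 4420 is opened in iptables, and reachability is verified with ping in both directions before the nvmf_tgt process is launched inside the namespace. A minimal stand-alone sketch of that setup, assuming the same interface names and 10.0.0.0/24 addressing seen in this log, would look like:

    # hedged reproduction of the topology set up by nvmf_tcp_init above
    sudo ip netns add cvl_0_0_ns_spdk                                        # namespace hosting the target port
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port into it
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside namespace
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                       # initiator -> target check
    sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check

These commands mirror the ones traced in the log; in the harness they run from nvmf_tcp_init, and the target is then started with ip netns exec so it binds its listener inside the namespace.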
00:22:39.412 [2024-07-27 01:30:31.051659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.412 [2024-07-27 01:30:31.051711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.412 [2024-07-27 01:30:31.051825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.412 [2024-07-27 01:30:31.051827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.347 01:30:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:40.347 01:30:31 -- common/autotest_common.sh@852 -- # return 0 00:22:40.347 01:30:31 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:40.347 01:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.347 01:30:31 -- common/autotest_common.sh@10 -- # set +x 00:22:40.347 [2024-07-27 01:30:31.802465] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.347 01:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.347 01:30:31 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:40.347 01:30:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:40.347 01:30:31 -- common/autotest_common.sh@10 -- # set +x 00:22:40.347 01:30:31 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:40.347 01:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.347 01:30:31 -- common/autotest_common.sh@10 -- # set +x 00:22:40.347 Malloc0 00:22:40.347 01:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.347 01:30:31 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:40.347 01:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.347 01:30:31 -- common/autotest_common.sh@10 -- # set +x 00:22:40.347 01:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.347 01:30:31 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:40.347 01:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.347 01:30:31 -- common/autotest_common.sh@10 -- # set +x 00:22:40.347 01:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.347 01:30:31 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.347 01:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.347 01:30:31 -- common/autotest_common.sh@10 -- # set +x 00:22:40.347 [2024-07-27 01:30:31.873340] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.347 01:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.347 01:30:31 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:40.347 01:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.347 01:30:31 -- common/autotest_common.sh@10 -- # set +x 00:22:40.347 01:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.347 01:30:31 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:40.347 01:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.347 01:30:31 -- common/autotest_common.sh@10 -- # set +x 00:22:40.347 [2024-07-27 01:30:31.889107] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:40.347 [ 
00:22:40.347 { 00:22:40.347 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:40.347 "subtype": "Discovery", 00:22:40.347 "listen_addresses": [ 00:22:40.347 { 00:22:40.347 "transport": "TCP", 00:22:40.347 "trtype": "TCP", 00:22:40.347 "adrfam": "IPv4", 00:22:40.347 "traddr": "10.0.0.2", 00:22:40.347 "trsvcid": "4420" 00:22:40.347 } 00:22:40.347 ], 00:22:40.347 "allow_any_host": true, 00:22:40.347 "hosts": [] 00:22:40.347 }, 00:22:40.347 { 00:22:40.347 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.347 "subtype": "NVMe", 00:22:40.347 "listen_addresses": [ 00:22:40.347 { 00:22:40.347 "transport": "TCP", 00:22:40.347 "trtype": "TCP", 00:22:40.347 "adrfam": "IPv4", 00:22:40.347 "traddr": "10.0.0.2", 00:22:40.347 "trsvcid": "4420" 00:22:40.347 } 00:22:40.347 ], 00:22:40.347 "allow_any_host": true, 00:22:40.347 "hosts": [], 00:22:40.347 "serial_number": "SPDK00000000000001", 00:22:40.347 "model_number": "SPDK bdev Controller", 00:22:40.347 "max_namespaces": 32, 00:22:40.347 "min_cntlid": 1, 00:22:40.347 "max_cntlid": 65519, 00:22:40.347 "namespaces": [ 00:22:40.347 { 00:22:40.347 "nsid": 1, 00:22:40.347 "bdev_name": "Malloc0", 00:22:40.347 "name": "Malloc0", 00:22:40.347 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:40.347 "eui64": "ABCDEF0123456789", 00:22:40.347 "uuid": "a106ec24-71db-4690-a8cc-3f3c5779b823" 00:22:40.347 } 00:22:40.347 ] 00:22:40.347 } 00:22:40.347 ] 00:22:40.347 01:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.347 01:30:31 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:40.347 [2024-07-27 01:30:31.910324] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
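Before spdk_nvme_identify runs, the rpc_cmd calls traced above configure the target over SPDK JSON-RPC: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnode1 with any-host access and serial SPDK00000000000001, attach Malloc0 as its namespace, and add 10.0.0.2:4420 listeners for both the subsystem and the discovery service, which produces the JSON dump shown above. The same sequence could be issued directly with the in-tree scripts/rpc.py client, assuming the target is reachable on its default /var/tmp/spdk.sock RPC socket:

    # the configuration performed by the rpc_cmd calls above, as direct scripts/rpc.py invocations
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems      # dump the resulting configuration as JSON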
00:22:40.347 [2024-07-27 01:30:31.910364] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid709062 ] 00:22:40.347 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.347 [2024-07-27 01:30:31.942230] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:40.347 [2024-07-27 01:30:31.942288] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:40.347 [2024-07-27 01:30:31.942299] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:40.347 [2024-07-27 01:30:31.942313] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:40.347 [2024-07-27 01:30:31.942325] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:40.347 [2024-07-27 01:30:31.946125] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:40.347 [2024-07-27 01:30:31.946200] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2300e10 0 00:22:40.347 [2024-07-27 01:30:31.953089] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:40.347 [2024-07-27 01:30:31.953123] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:40.347 [2024-07-27 01:30:31.953133] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:40.347 [2024-07-27 01:30:31.953139] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:40.347 [2024-07-27 01:30:31.953205] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.347 [2024-07-27 01:30:31.953219] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.347 [2024-07-27 01:30:31.953226] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2300e10) 00:22:40.347 [2024-07-27 01:30:31.953246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:40.347 [2024-07-27 01:30:31.953273] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2380bf0, cid 0, qid 0 00:22:40.347 [2024-07-27 01:30:31.961074] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.347 [2024-07-27 01:30:31.961092] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.347 [2024-07-27 01:30:31.961100] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.347 [2024-07-27 01:30:31.961114] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2380bf0) on tqpair=0x2300e10 00:22:40.347 [2024-07-27 01:30:31.961154] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:40.347 [2024-07-27 01:30:31.961168] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:40.347 [2024-07-27 01:30:31.961178] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:40.347 [2024-07-27 01:30:31.961197] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.347 [2024-07-27 01:30:31.961206] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:22:40.347 [2024-07-27 01:30:31.961213] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2300e10) 00:22:40.347 [2024-07-27 01:30:31.961224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.347 [2024-07-27 01:30:31.961248] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2380bf0, cid 0, qid 0 00:22:40.347 [2024-07-27 01:30:31.961398] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.347 [2024-07-27 01:30:31.961420] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.347 [2024-07-27 01:30:31.961427] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.347 [2024-07-27 01:30:31.961434] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2380bf0) on tqpair=0x2300e10 00:22:40.347 [2024-07-27 01:30:31.961445] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:40.347 [2024-07-27 01:30:31.961458] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:40.347 [2024-07-27 01:30:31.961470] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.347 [2024-07-27 01:30:31.961478] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.961484] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2300e10) 00:22:40.348 [2024-07-27 01:30:31.961495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.348 [2024-07-27 01:30:31.961516] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2380bf0, cid 0, qid 0 00:22:40.348 [2024-07-27 01:30:31.961653] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.348 [2024-07-27 01:30:31.961669] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.348 [2024-07-27 01:30:31.961676] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.961683] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2380bf0) on tqpair=0x2300e10 00:22:40.348 [2024-07-27 01:30:31.961693] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:40.348 [2024-07-27 01:30:31.961707] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:40.348 [2024-07-27 01:30:31.961720] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.961727] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.961734] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2300e10) 00:22:40.348 [2024-07-27 01:30:31.961745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.348 [2024-07-27 01:30:31.961766] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2380bf0, cid 0, qid 0 00:22:40.348 [2024-07-27 01:30:31.961898] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.348 [2024-07-27 
01:30:31.961911] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.348 [2024-07-27 01:30:31.961918] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.961925] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2380bf0) on tqpair=0x2300e10 00:22:40.348 [2024-07-27 01:30:31.961939] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:40.348 [2024-07-27 01:30:31.961956] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.961965] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.961971] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2300e10) 00:22:40.348 [2024-07-27 01:30:31.961982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.348 [2024-07-27 01:30:31.962002] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2380bf0, cid 0, qid 0 00:22:40.348 [2024-07-27 01:30:31.962145] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.348 [2024-07-27 01:30:31.962159] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.348 [2024-07-27 01:30:31.962166] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.962173] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2380bf0) on tqpair=0x2300e10 00:22:40.348 [2024-07-27 01:30:31.962183] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:40.348 [2024-07-27 01:30:31.962192] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:40.348 [2024-07-27 01:30:31.962205] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:40.348 [2024-07-27 01:30:31.962315] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:40.348 [2024-07-27 01:30:31.962324] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:40.348 [2024-07-27 01:30:31.962366] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.962374] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.962380] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2300e10) 00:22:40.348 [2024-07-27 01:30:31.962390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.348 [2024-07-27 01:30:31.962411] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2380bf0, cid 0, qid 0 00:22:40.348 [2024-07-27 01:30:31.962568] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.348 [2024-07-27 01:30:31.962584] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.348 [2024-07-27 01:30:31.962591] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.962598] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2380bf0) on tqpair=0x2300e10 00:22:40.348 [2024-07-27 01:30:31.962608] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:40.348 [2024-07-27 01:30:31.962624] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.962633] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.962640] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2300e10) 00:22:40.348 [2024-07-27 01:30:31.962651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.348 [2024-07-27 01:30:31.962671] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2380bf0, cid 0, qid 0 00:22:40.348 [2024-07-27 01:30:31.962812] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.348 [2024-07-27 01:30:31.962827] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.348 [2024-07-27 01:30:31.962834] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.962845] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2380bf0) on tqpair=0x2300e10 00:22:40.348 [2024-07-27 01:30:31.962855] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:40.348 [2024-07-27 01:30:31.962864] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:40.348 [2024-07-27 01:30:31.962877] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:40.348 [2024-07-27 01:30:31.962891] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:40.348 [2024-07-27 01:30:31.962907] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.962915] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.962922] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2300e10) 00:22:40.348 [2024-07-27 01:30:31.962933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.348 [2024-07-27 01:30:31.962971] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2380bf0, cid 0, qid 0 00:22:40.348 [2024-07-27 01:30:31.963189] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.348 [2024-07-27 01:30:31.963206] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.348 [2024-07-27 01:30:31.963213] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.963220] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2300e10): datao=0, datal=4096, cccid=0 00:22:40.348 [2024-07-27 01:30:31.963228] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2380bf0) on tqpair(0x2300e10): 
expected_datao=0, payload_size=4096 00:22:40.348 [2024-07-27 01:30:31.963240] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.963249] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.963284] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.348 [2024-07-27 01:30:31.963296] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.348 [2024-07-27 01:30:31.963302] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.963309] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2380bf0) on tqpair=0x2300e10 00:22:40.348 [2024-07-27 01:30:31.963323] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:40.348 [2024-07-27 01:30:31.963333] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:40.348 [2024-07-27 01:30:31.963341] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:40.348 [2024-07-27 01:30:31.963350] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:40.348 [2024-07-27 01:30:31.963358] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:40.348 [2024-07-27 01:30:31.963366] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:40.348 [2024-07-27 01:30:31.963386] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:40.348 [2024-07-27 01:30:31.963400] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.963408] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.963419] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2300e10) 00:22:40.348 [2024-07-27 01:30:31.963429] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.348 [2024-07-27 01:30:31.963469] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2380bf0, cid 0, qid 0 00:22:40.348 [2024-07-27 01:30:31.963630] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.348 [2024-07-27 01:30:31.963646] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.348 [2024-07-27 01:30:31.963653] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.963660] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2380bf0) on tqpair=0x2300e10 00:22:40.348 [2024-07-27 01:30:31.963675] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.963683] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.963689] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2300e10) 00:22:40.348 [2024-07-27 01:30:31.963699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:40.348 [2024-07-27 01:30:31.963709] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.348 [2024-07-27 01:30:31.963716] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.963722] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2300e10) 00:22:40.349 [2024-07-27 01:30:31.963731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.349 [2024-07-27 01:30:31.963741] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.963748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.963754] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2300e10) 00:22:40.349 [2024-07-27 01:30:31.963763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.349 [2024-07-27 01:30:31.963773] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.963779] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.963786] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2300e10) 00:22:40.349 [2024-07-27 01:30:31.963794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.349 [2024-07-27 01:30:31.963803] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:40.349 [2024-07-27 01:30:31.963836] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:40.349 [2024-07-27 01:30:31.963849] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.963857] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.963863] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2300e10) 00:22:40.349 [2024-07-27 01:30:31.963873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.349 [2024-07-27 01:30:31.963895] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2380bf0, cid 0, qid 0 00:22:40.349 [2024-07-27 01:30:31.963921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2380d50, cid 1, qid 0 00:22:40.349 [2024-07-27 01:30:31.963929] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2380eb0, cid 2, qid 0 00:22:40.349 [2024-07-27 01:30:31.963936] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381010, cid 3, qid 0 00:22:40.349 [2024-07-27 01:30:31.963944] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381170, cid 4, qid 0 00:22:40.349 [2024-07-27 01:30:31.964112] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.349 [2024-07-27 01:30:31.964133] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.349 [2024-07-27 01:30:31.964141] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964148] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381170) on tqpair=0x2300e10 00:22:40.349 [2024-07-27 01:30:31.964159] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:40.349 [2024-07-27 01:30:31.964168] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:40.349 [2024-07-27 01:30:31.964186] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964195] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964202] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2300e10) 00:22:40.349 [2024-07-27 01:30:31.964212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.349 [2024-07-27 01:30:31.964233] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381170, cid 4, qid 0 00:22:40.349 [2024-07-27 01:30:31.964386] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.349 [2024-07-27 01:30:31.964401] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.349 [2024-07-27 01:30:31.964408] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964414] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2300e10): datao=0, datal=4096, cccid=4 00:22:40.349 [2024-07-27 01:30:31.964422] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2381170) on tqpair(0x2300e10): expected_datao=0, payload_size=4096 00:22:40.349 [2024-07-27 01:30:31.964433] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964441] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964480] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.349 [2024-07-27 01:30:31.964491] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.349 [2024-07-27 01:30:31.964498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964504] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381170) on tqpair=0x2300e10 00:22:40.349 [2024-07-27 01:30:31.964525] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:40.349 [2024-07-27 01:30:31.964564] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964575] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964581] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2300e10) 00:22:40.349 [2024-07-27 01:30:31.964592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.349 [2024-07-27 01:30:31.964604] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964612] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964618] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2300e10) 00:22:40.349 [2024-07-27 
01:30:31.964627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.349 [2024-07-27 01:30:31.964668] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381170, cid 4, qid 0 00:22:40.349 [2024-07-27 01:30:31.964680] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23812d0, cid 5, qid 0 00:22:40.349 [2024-07-27 01:30:31.964884] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.349 [2024-07-27 01:30:31.964900] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.349 [2024-07-27 01:30:31.964907] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964918] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2300e10): datao=0, datal=1024, cccid=4 00:22:40.349 [2024-07-27 01:30:31.964927] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2381170) on tqpair(0x2300e10): expected_datao=0, payload_size=1024 00:22:40.349 [2024-07-27 01:30:31.964938] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964945] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964953] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.349 [2024-07-27 01:30:31.964962] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.349 [2024-07-27 01:30:31.964969] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:31.964976] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23812d0) on tqpair=0x2300e10 00:22:40.349 [2024-07-27 01:30:32.008071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.349 [2024-07-27 01:30:32.008091] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.349 [2024-07-27 01:30:32.008099] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:32.008106] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381170) on tqpair=0x2300e10 00:22:40.349 [2024-07-27 01:30:32.008127] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:32.008137] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:32.008144] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2300e10) 00:22:40.349 [2024-07-27 01:30:32.008155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.349 [2024-07-27 01:30:32.008186] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381170, cid 4, qid 0 00:22:40.349 [2024-07-27 01:30:32.008348] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.349 [2024-07-27 01:30:32.008361] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.349 [2024-07-27 01:30:32.008368] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:32.008375] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2300e10): datao=0, datal=3072, cccid=4 00:22:40.349 [2024-07-27 01:30:32.008382] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2381170) on tqpair(0x2300e10): expected_datao=0, payload_size=3072 
00:22:40.349 [2024-07-27 01:30:32.008393] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:32.008401] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:32.008451] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.349 [2024-07-27 01:30:32.008463] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.349 [2024-07-27 01:30:32.008470] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:32.008477] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381170) on tqpair=0x2300e10 00:22:40.349 [2024-07-27 01:30:32.008493] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:32.008501] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:32.008508] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2300e10) 00:22:40.349 [2024-07-27 01:30:32.008518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.349 [2024-07-27 01:30:32.008545] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381170, cid 4, qid 0 00:22:40.349 [2024-07-27 01:30:32.008700] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.349 [2024-07-27 01:30:32.008714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.349 [2024-07-27 01:30:32.008722] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:32.008729] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2300e10): datao=0, datal=8, cccid=4 00:22:40.349 [2024-07-27 01:30:32.008742] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2381170) on tqpair(0x2300e10): expected_datao=0, payload_size=8 00:22:40.349 [2024-07-27 01:30:32.008753] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:32.008762] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:32.049194] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.349 [2024-07-27 01:30:32.049213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.349 [2024-07-27 01:30:32.049221] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.349 [2024-07-27 01:30:32.049227] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381170) on tqpair=0x2300e10 00:22:40.349 ===================================================== 00:22:40.349 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:40.350 ===================================================== 00:22:40.350 Controller Capabilities/Features 00:22:40.350 ================================ 00:22:40.350 Vendor ID: 0000 00:22:40.350 Subsystem Vendor ID: 0000 00:22:40.350 Serial Number: .................... 00:22:40.350 Model Number: ........................................ 
00:22:40.350 Firmware Version: 24.01.1 00:22:40.350 Recommended Arb Burst: 0 00:22:40.350 IEEE OUI Identifier: 00 00 00 00:22:40.350 Multi-path I/O 00:22:40.350 May have multiple subsystem ports: No 00:22:40.350 May have multiple controllers: No 00:22:40.350 Associated with SR-IOV VF: No 00:22:40.350 Max Data Transfer Size: 131072 00:22:40.350 Max Number of Namespaces: 0 00:22:40.350 Max Number of I/O Queues: 1024 00:22:40.350 NVMe Specification Version (VS): 1.3 00:22:40.350 NVMe Specification Version (Identify): 1.3 00:22:40.350 Maximum Queue Entries: 128 00:22:40.350 Contiguous Queues Required: Yes 00:22:40.350 Arbitration Mechanisms Supported 00:22:40.350 Weighted Round Robin: Not Supported 00:22:40.350 Vendor Specific: Not Supported 00:22:40.350 Reset Timeout: 15000 ms 00:22:40.350 Doorbell Stride: 4 bytes 00:22:40.350 NVM Subsystem Reset: Not Supported 00:22:40.350 Command Sets Supported 00:22:40.350 NVM Command Set: Supported 00:22:40.350 Boot Partition: Not Supported 00:22:40.350 Memory Page Size Minimum: 4096 bytes 00:22:40.350 Memory Page Size Maximum: 4096 bytes 00:22:40.350 Persistent Memory Region: Not Supported 00:22:40.350 Optional Asynchronous Events Supported 00:22:40.350 Namespace Attribute Notices: Not Supported 00:22:40.350 Firmware Activation Notices: Not Supported 00:22:40.350 ANA Change Notices: Not Supported 00:22:40.350 PLE Aggregate Log Change Notices: Not Supported 00:22:40.350 LBA Status Info Alert Notices: Not Supported 00:22:40.350 EGE Aggregate Log Change Notices: Not Supported 00:22:40.350 Normal NVM Subsystem Shutdown event: Not Supported 00:22:40.350 Zone Descriptor Change Notices: Not Supported 00:22:40.350 Discovery Log Change Notices: Supported 00:22:40.350 Controller Attributes 00:22:40.350 128-bit Host Identifier: Not Supported 00:22:40.350 Non-Operational Permissive Mode: Not Supported 00:22:40.350 NVM Sets: Not Supported 00:22:40.350 Read Recovery Levels: Not Supported 00:22:40.350 Endurance Groups: Not Supported 00:22:40.350 Predictable Latency Mode: Not Supported 00:22:40.350 Traffic Based Keep ALive: Not Supported 00:22:40.350 Namespace Granularity: Not Supported 00:22:40.350 SQ Associations: Not Supported 00:22:40.350 UUID List: Not Supported 00:22:40.350 Multi-Domain Subsystem: Not Supported 00:22:40.350 Fixed Capacity Management: Not Supported 00:22:40.350 Variable Capacity Management: Not Supported 00:22:40.350 Delete Endurance Group: Not Supported 00:22:40.350 Delete NVM Set: Not Supported 00:22:40.350 Extended LBA Formats Supported: Not Supported 00:22:40.350 Flexible Data Placement Supported: Not Supported 00:22:40.350 00:22:40.350 Controller Memory Buffer Support 00:22:40.350 ================================ 00:22:40.350 Supported: No 00:22:40.350 00:22:40.350 Persistent Memory Region Support 00:22:40.350 ================================ 00:22:40.350 Supported: No 00:22:40.350 00:22:40.350 Admin Command Set Attributes 00:22:40.350 ============================ 00:22:40.350 Security Send/Receive: Not Supported 00:22:40.350 Format NVM: Not Supported 00:22:40.350 Firmware Activate/Download: Not Supported 00:22:40.350 Namespace Management: Not Supported 00:22:40.350 Device Self-Test: Not Supported 00:22:40.350 Directives: Not Supported 00:22:40.350 NVMe-MI: Not Supported 00:22:40.350 Virtualization Management: Not Supported 00:22:40.350 Doorbell Buffer Config: Not Supported 00:22:40.350 Get LBA Status Capability: Not Supported 00:22:40.350 Command & Feature Lockdown Capability: Not Supported 00:22:40.350 Abort Command Limit: 1 00:22:40.350 
Async Event Request Limit: 4 00:22:40.350 Number of Firmware Slots: N/A 00:22:40.350 Firmware Slot 1 Read-Only: N/A 00:22:40.350 Firmware Activation Without Reset: N/A 00:22:40.350 Multiple Update Detection Support: N/A 00:22:40.350 Firmware Update Granularity: No Information Provided 00:22:40.350 Per-Namespace SMART Log: No 00:22:40.350 Asymmetric Namespace Access Log Page: Not Supported 00:22:40.350 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:40.350 Command Effects Log Page: Not Supported 00:22:40.350 Get Log Page Extended Data: Supported 00:22:40.350 Telemetry Log Pages: Not Supported 00:22:40.350 Persistent Event Log Pages: Not Supported 00:22:40.350 Supported Log Pages Log Page: May Support 00:22:40.350 Commands Supported & Effects Log Page: Not Supported 00:22:40.350 Feature Identifiers & Effects Log Page:May Support 00:22:40.350 NVMe-MI Commands & Effects Log Page: May Support 00:22:40.350 Data Area 4 for Telemetry Log: Not Supported 00:22:40.350 Error Log Page Entries Supported: 128 00:22:40.350 Keep Alive: Not Supported 00:22:40.350 00:22:40.350 NVM Command Set Attributes 00:22:40.350 ========================== 00:22:40.350 Submission Queue Entry Size 00:22:40.350 Max: 1 00:22:40.350 Min: 1 00:22:40.350 Completion Queue Entry Size 00:22:40.350 Max: 1 00:22:40.350 Min: 1 00:22:40.350 Number of Namespaces: 0 00:22:40.350 Compare Command: Not Supported 00:22:40.350 Write Uncorrectable Command: Not Supported 00:22:40.350 Dataset Management Command: Not Supported 00:22:40.350 Write Zeroes Command: Not Supported 00:22:40.350 Set Features Save Field: Not Supported 00:22:40.350 Reservations: Not Supported 00:22:40.350 Timestamp: Not Supported 00:22:40.350 Copy: Not Supported 00:22:40.350 Volatile Write Cache: Not Present 00:22:40.350 Atomic Write Unit (Normal): 1 00:22:40.350 Atomic Write Unit (PFail): 1 00:22:40.350 Atomic Compare & Write Unit: 1 00:22:40.350 Fused Compare & Write: Supported 00:22:40.350 Scatter-Gather List 00:22:40.350 SGL Command Set: Supported 00:22:40.350 SGL Keyed: Supported 00:22:40.350 SGL Bit Bucket Descriptor: Not Supported 00:22:40.350 SGL Metadata Pointer: Not Supported 00:22:40.350 Oversized SGL: Not Supported 00:22:40.350 SGL Metadata Address: Not Supported 00:22:40.350 SGL Offset: Supported 00:22:40.350 Transport SGL Data Block: Not Supported 00:22:40.350 Replay Protected Memory Block: Not Supported 00:22:40.350 00:22:40.350 Firmware Slot Information 00:22:40.350 ========================= 00:22:40.350 Active slot: 0 00:22:40.350 00:22:40.350 00:22:40.350 Error Log 00:22:40.350 ========= 00:22:40.350 00:22:40.350 Active Namespaces 00:22:40.350 ================= 00:22:40.350 Discovery Log Page 00:22:40.350 ================== 00:22:40.350 Generation Counter: 2 00:22:40.350 Number of Records: 2 00:22:40.350 Record Format: 0 00:22:40.350 00:22:40.350 Discovery Log Entry 0 00:22:40.350 ---------------------- 00:22:40.350 Transport Type: 3 (TCP) 00:22:40.350 Address Family: 1 (IPv4) 00:22:40.350 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:40.350 Entry Flags: 00:22:40.350 Duplicate Returned Information: 1 00:22:40.350 Explicit Persistent Connection Support for Discovery: 1 00:22:40.350 Transport Requirements: 00:22:40.350 Secure Channel: Not Required 00:22:40.350 Port ID: 0 (0x0000) 00:22:40.350 Controller ID: 65535 (0xffff) 00:22:40.350 Admin Max SQ Size: 128 00:22:40.350 Transport Service Identifier: 4420 00:22:40.350 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:40.350 Transport Address: 10.0.0.2 00:22:40.350 
Discovery Log Entry 1 00:22:40.350 ---------------------- 00:22:40.350 Transport Type: 3 (TCP) 00:22:40.350 Address Family: 1 (IPv4) 00:22:40.350 Subsystem Type: 2 (NVM Subsystem) 00:22:40.350 Entry Flags: 00:22:40.350 Duplicate Returned Information: 0 00:22:40.350 Explicit Persistent Connection Support for Discovery: 0 00:22:40.350 Transport Requirements: 00:22:40.350 Secure Channel: Not Required 00:22:40.350 Port ID: 0 (0x0000) 00:22:40.350 Controller ID: 65535 (0xffff) 00:22:40.350 Admin Max SQ Size: 128 00:22:40.350 Transport Service Identifier: 4420 00:22:40.350 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:40.350 Transport Address: 10.0.0.2 [2024-07-27 01:30:32.049345] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:40.351 [2024-07-27 01:30:32.049372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.351 [2024-07-27 01:30:32.049384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.351 [2024-07-27 01:30:32.049394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.351 [2024-07-27 01:30:32.049403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.351 [2024-07-27 01:30:32.049417] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.049425] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.049432] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2300e10) 00:22:40.351 [2024-07-27 01:30:32.049443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.351 [2024-07-27 01:30:32.049483] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381010, cid 3, qid 0 00:22:40.351 [2024-07-27 01:30:32.049646] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.351 [2024-07-27 01:30:32.049662] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.351 [2024-07-27 01:30:32.049669] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.049676] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381010) on tqpair=0x2300e10 00:22:40.351 [2024-07-27 01:30:32.049689] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.049697] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.049703] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2300e10) 00:22:40.351 [2024-07-27 01:30:32.049714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.351 [2024-07-27 01:30:32.049741] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381010, cid 3, qid 0 00:22:40.351 [2024-07-27 01:30:32.049888] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.351 [2024-07-27 01:30:32.049901] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.351 [2024-07-27 01:30:32.049908] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.049914] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381010) on tqpair=0x2300e10 00:22:40.351 [2024-07-27 01:30:32.049924] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:40.351 [2024-07-27 01:30:32.049933] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:40.351 [2024-07-27 01:30:32.049948] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.049957] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.049968] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2300e10) 00:22:40.351 [2024-07-27 01:30:32.049979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.351 [2024-07-27 01:30:32.050000] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381010, cid 3, qid 0 00:22:40.351 [2024-07-27 01:30:32.050175] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.351 [2024-07-27 01:30:32.050192] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.351 [2024-07-27 01:30:32.050199] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.050206] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381010) on tqpair=0x2300e10 00:22:40.351 [2024-07-27 01:30:32.050225] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.050234] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.050241] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2300e10) 00:22:40.351 [2024-07-27 01:30:32.050251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.351 [2024-07-27 01:30:32.050273] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381010, cid 3, qid 0 00:22:40.351 [2024-07-27 01:30:32.050424] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.351 [2024-07-27 01:30:32.050437] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.351 [2024-07-27 01:30:32.050444] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.050450] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381010) on tqpair=0x2300e10 00:22:40.351 [2024-07-27 01:30:32.050467] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.050477] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.050483] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2300e10) 00:22:40.351 [2024-07-27 01:30:32.050494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.351 [2024-07-27 01:30:32.050514] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381010, cid 3, qid 0 00:22:40.351 [2024-07-27 01:30:32.050645] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.351 [2024-07-27 
01:30:32.050657] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.351 [2024-07-27 01:30:32.050664] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.050671] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381010) on tqpair=0x2300e10 00:22:40.351 [2024-07-27 01:30:32.050688] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.050697] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.050704] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2300e10) 00:22:40.351 [2024-07-27 01:30:32.050714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.351 [2024-07-27 01:30:32.050734] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381010, cid 3, qid 0 00:22:40.351 [2024-07-27 01:30:32.050868] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.351 [2024-07-27 01:30:32.050883] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.351 [2024-07-27 01:30:32.050890] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.050897] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381010) on tqpair=0x2300e10 00:22:40.351 [2024-07-27 01:30:32.050914] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.050924] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.050930] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2300e10) 00:22:40.351 [2024-07-27 01:30:32.050945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.351 [2024-07-27 01:30:32.050967] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381010, cid 3, qid 0 00:22:40.351 [2024-07-27 01:30:32.051108] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.351 [2024-07-27 01:30:32.051123] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.351 [2024-07-27 01:30:32.051130] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.051137] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381010) on tqpair=0x2300e10 00:22:40.351 [2024-07-27 01:30:32.051155] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.051164] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.051171] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2300e10) 00:22:40.351 [2024-07-27 01:30:32.051182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.351 [2024-07-27 01:30:32.051202] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381010, cid 3, qid 0 00:22:40.351 [2024-07-27 01:30:32.051357] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.351 [2024-07-27 01:30:32.051372] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.351 [2024-07-27 01:30:32.051379] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
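The discovery log page printed above is what drives the rest of this run: each entry carries the transport type, address family, transport address, transport service identifier and subsystem NQN a host needs to reach the listed subsystem, and the second entry (Subsystem Type 2, an NVM Subsystem) points at nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. The spdk_nvme_identify invocation further down in this log passes exactly those values as a space-separated "key:value" transport-ID string via -r. A minimal illustrative sketch (plain Python, not part of the test suite; the field names and numeric codes simply mirror the log entries printed above) of turning one entry into that string:

    # Illustrative only: build the transport-ID string passed to spdk_nvme_identify -r
    # from the fields printed in a discovery log entry above.
    def transport_id_from_entry(entry):
        trtype = {3: "tcp"}.get(entry["trtype"], str(entry["trtype"]))   # Transport Type: 3 (TCP)
        adrfam = {1: "IPv4"}.get(entry["adrfam"], str(entry["adrfam"]))  # Address Family: 1 (IPv4)
        return (f"trtype:{trtype} adrfam:{adrfam} traddr:{entry['traddr']} "
                f"trsvcid:{entry['trsvcid']} subnqn:{entry['subnqn']}")

    entry1 = {"trtype": 3, "adrfam": 1, "traddr": "10.0.0.2",
              "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1"}
    print(transport_id_from_entry(entry1))
    # -> trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1

That string matches the -r argument visible in the host/identify.sh command later in this log.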
00:22:40.351 [2024-07-27 01:30:32.051386] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381010) on tqpair=0x2300e10 00:22:40.351 [2024-07-27 01:30:32.051403] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.051413] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.351 [2024-07-27 01:30:32.051419] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2300e10) 00:22:40.351 [2024-07-27 01:30:32.051430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.352 [2024-07-27 01:30:32.051450] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381010, cid 3, qid 0 00:22:40.352 [2024-07-27 01:30:32.051605] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.352 [2024-07-27 01:30:32.051620] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.352 [2024-07-27 01:30:32.051627] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.352 [2024-07-27 01:30:32.051633] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381010) on tqpair=0x2300e10 00:22:40.352 [2024-07-27 01:30:32.051651] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.352 [2024-07-27 01:30:32.051660] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.352 [2024-07-27 01:30:32.051667] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2300e10) 00:22:40.352 [2024-07-27 01:30:32.051677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.352 [2024-07-27 01:30:32.051698] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381010, cid 3, qid 0 00:22:40.352 [2024-07-27 01:30:32.051832] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.352 [2024-07-27 01:30:32.051847] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.352 [2024-07-27 01:30:32.051854] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.352 [2024-07-27 01:30:32.051860] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381010) on tqpair=0x2300e10 00:22:40.352 [2024-07-27 01:30:32.051878] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.352 [2024-07-27 01:30:32.051887] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.352 [2024-07-27 01:30:32.051894] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2300e10) 00:22:40.352 [2024-07-27 01:30:32.051904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.352 [2024-07-27 01:30:32.051929] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381010, cid 3, qid 0 00:22:40.352 [2024-07-27 01:30:32.056071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.352 [2024-07-27 01:30:32.056090] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.352 [2024-07-27 01:30:32.056097] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.352 [2024-07-27 01:30:32.056104] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381010) on tqpair=0x2300e10 00:22:40.352 [2024-07-27 01:30:32.056123] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.352 [2024-07-27 01:30:32.056132] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.352 [2024-07-27 01:30:32.056139] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2300e10) 00:22:40.352 [2024-07-27 01:30:32.056150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.352 [2024-07-27 01:30:32.056172] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2381010, cid 3, qid 0 00:22:40.352 [2024-07-27 01:30:32.056330] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.352 [2024-07-27 01:30:32.056343] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.352 [2024-07-27 01:30:32.056349] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.352 [2024-07-27 01:30:32.056356] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2381010) on tqpair=0x2300e10 00:22:40.352 [2024-07-27 01:30:32.056371] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:22:40.352 00:22:40.352 01:30:32 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:40.352 [2024-07-27 01:30:32.089921] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:40.352 [2024-07-27 01:30:32.089964] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid709170 ] 00:22:40.352 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.614 [2024-07-27 01:30:32.124861] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:40.614 [2024-07-27 01:30:32.124908] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:40.614 [2024-07-27 01:30:32.124917] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:40.614 [2024-07-27 01:30:32.124932] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:40.615 [2024-07-27 01:30:32.124943] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:40.615 [2024-07-27 01:30:32.125174] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:40.615 [2024-07-27 01:30:32.125215] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x686e10 0 00:22:40.615 [2024-07-27 01:30:32.140091] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:40.615 [2024-07-27 01:30:32.140119] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:40.615 [2024-07-27 01:30:32.140127] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:40.615 [2024-07-27 01:30:32.140133] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:40.615 [2024-07-27 01:30:32.140185] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.140201] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.140209] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686e10) 00:22:40.615 [2024-07-27 01:30:32.140224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:40.615 [2024-07-27 01:30:32.140250] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706bf0, cid 0, qid 0 00:22:40.615 [2024-07-27 01:30:32.148072] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.615 [2024-07-27 01:30:32.148090] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.615 [2024-07-27 01:30:32.148098] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.148105] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x706bf0) on tqpair=0x686e10 00:22:40.615 [2024-07-27 01:30:32.148123] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:40.615 [2024-07-27 01:30:32.148135] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:40.615 [2024-07-27 01:30:32.148144] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:40.615 [2024-07-27 01:30:32.148161] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.148169] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.148176] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686e10) 00:22:40.615 [2024-07-27 01:30:32.148188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.615 [2024-07-27 01:30:32.148211] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706bf0, cid 0, qid 0 00:22:40.615 [2024-07-27 01:30:32.148368] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.615 [2024-07-27 01:30:32.148384] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.615 [2024-07-27 01:30:32.148391] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.148398] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x706bf0) on tqpair=0x686e10 00:22:40.615 [2024-07-27 01:30:32.148407] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:40.615 [2024-07-27 01:30:32.148421] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:40.615 [2024-07-27 01:30:32.148433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.148441] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.148448] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686e10) 00:22:40.615 [2024-07-27 01:30:32.148459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.615 [2024-07-27 01:30:32.148480] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706bf0, cid 0, qid 0 00:22:40.615 [2024-07-27 01:30:32.148620] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.615 [2024-07-27 01:30:32.148635] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.615 [2024-07-27 01:30:32.148642] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.148649] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x706bf0) on tqpair=0x686e10 00:22:40.615 [2024-07-27 01:30:32.148657] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:40.615 [2024-07-27 01:30:32.148671] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:40.615 [2024-07-27 01:30:32.148684] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.148692] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.148702] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686e10) 00:22:40.615 [2024-07-27 01:30:32.148714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.615 [2024-07-27 01:30:32.148735] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706bf0, cid 0, qid 0 00:22:40.615 [2024-07-27 01:30:32.148864] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.615 [2024-07-27 01:30:32.148876] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.615 [2024-07-27 01:30:32.148883] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.148890] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x706bf0) on tqpair=0x686e10 00:22:40.615 [2024-07-27 01:30:32.148899] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:40.615 [2024-07-27 01:30:32.148915] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.148924] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.148931] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686e10) 00:22:40.615 [2024-07-27 01:30:32.148941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.615 [2024-07-27 01:30:32.148962] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706bf0, cid 0, qid 0 00:22:40.615 [2024-07-27 01:30:32.149095] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.615 [2024-07-27 01:30:32.149109] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.615 [2024-07-27 01:30:32.149116] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.149123] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x706bf0) on tqpair=0x686e10 00:22:40.615 [2024-07-27 01:30:32.149131] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:40.615 [2024-07-27 01:30:32.149139] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:40.615 [2024-07-27 
01:30:32.149152] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:40.615 [2024-07-27 01:30:32.149262] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:40.615 [2024-07-27 01:30:32.149270] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:40.615 [2024-07-27 01:30:32.149282] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.149290] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.149296] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686e10) 00:22:40.615 [2024-07-27 01:30:32.149307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.615 [2024-07-27 01:30:32.149329] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706bf0, cid 0, qid 0 00:22:40.615 [2024-07-27 01:30:32.149466] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.615 [2024-07-27 01:30:32.149481] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.615 [2024-07-27 01:30:32.149488] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.149495] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x706bf0) on tqpair=0x686e10 00:22:40.615 [2024-07-27 01:30:32.149503] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:40.615 [2024-07-27 01:30:32.149520] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.149529] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.149539] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686e10) 00:22:40.615 [2024-07-27 01:30:32.149551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.615 [2024-07-27 01:30:32.149572] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706bf0, cid 0, qid 0 00:22:40.615 [2024-07-27 01:30:32.149702] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.615 [2024-07-27 01:30:32.149714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.615 [2024-07-27 01:30:32.149721] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.615 [2024-07-27 01:30:32.149728] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x706bf0) on tqpair=0x686e10 00:22:40.615 [2024-07-27 01:30:32.149736] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:40.615 [2024-07-27 01:30:32.149744] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:40.615 [2024-07-27 01:30:32.149757] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:40.615 [2024-07-27 01:30:32.149771] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:40.616 [2024-07-27 01:30:32.149785] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.149794] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.149800] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686e10) 00:22:40.616 [2024-07-27 01:30:32.149811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.616 [2024-07-27 01:30:32.149832] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706bf0, cid 0, qid 0 00:22:40.616 [2024-07-27 01:30:32.150079] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.616 [2024-07-27 01:30:32.150095] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.616 [2024-07-27 01:30:32.150103] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150109] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x686e10): datao=0, datal=4096, cccid=0 00:22:40.616 [2024-07-27 01:30:32.150117] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x706bf0) on tqpair(0x686e10): expected_datao=0, payload_size=4096 00:22:40.616 [2024-07-27 01:30:32.150129] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150137] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150174] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.616 [2024-07-27 01:30:32.150186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.616 [2024-07-27 01:30:32.150193] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150199] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x706bf0) on tqpair=0x686e10 00:22:40.616 [2024-07-27 01:30:32.150211] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:40.616 [2024-07-27 01:30:32.150220] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:40.616 [2024-07-27 01:30:32.150227] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:40.616 [2024-07-27 01:30:32.150235] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:40.616 [2024-07-27 01:30:32.150242] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:40.616 [2024-07-27 01:30:32.150251] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:40.616 [2024-07-27 01:30:32.150273] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:40.616 [2024-07-27 01:30:32.150287] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150295] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150301] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686e10) 
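The _nvme_ctrlr_set_state lines above trace SPDK's controller initialization state machine for nqn.2016-06.io.spdk:cnode1: connect the admin queue, read VS and CAP, check CC.EN, enable the controller by writing CC.EN = 1, wait for CSTS.RDY = 1 (15000 ms timeout), reset the admin queue, then run Identify Controller and configure AER. On a fabrics controller these register reads and writes travel as the FABRIC PROPERTY GET/SET admin commands visible in the same trace. A rough sketch of the enable-and-wait step only (generic Python pseudologic under assumed prop_get/prop_set register accessors, not SPDK's actual code):

    import time

    # prop_get/prop_set are hypothetical accessors standing in for Fabrics Property
    # Get/Set of the CC and CSTS registers; the timeout mirrors the 15000 ms
    # "wait for CSTS.RDY = 1" state shown in the trace above.
    def enable_controller(prop_get, prop_set, timeout_ms=15000):
        prop_set("cc", prop_get("cc") | 0x1)        # set CC.EN = 1
        deadline = time.monotonic() + timeout_ms / 1000.0
        while time.monotonic() < deadline:
            if prop_get("csts") & 0x1:              # CSTS.RDY = 1: controller is ready
                return
            time.sleep(0.01)                        # poll until ready or timeout
        raise TimeoutError(f"CSTS.RDY not set within {timeout_ms} ms")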
00:22:40.616 [2024-07-27 01:30:32.150312] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.616 [2024-07-27 01:30:32.150334] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706bf0, cid 0, qid 0 00:22:40.616 [2024-07-27 01:30:32.150478] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.616 [2024-07-27 01:30:32.150491] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.616 [2024-07-27 01:30:32.150498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150505] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x706bf0) on tqpair=0x686e10 00:22:40.616 [2024-07-27 01:30:32.150515] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150523] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150530] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686e10) 00:22:40.616 [2024-07-27 01:30:32.150540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.616 [2024-07-27 01:30:32.150550] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150557] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150564] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x686e10) 00:22:40.616 [2024-07-27 01:30:32.150572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.616 [2024-07-27 01:30:32.150582] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150589] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150596] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x686e10) 00:22:40.616 [2024-07-27 01:30:32.150605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.616 [2024-07-27 01:30:32.150614] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150621] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150628] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686e10) 00:22:40.616 [2024-07-27 01:30:32.150652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.616 [2024-07-27 01:30:32.150661] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:40.616 [2024-07-27 01:30:32.150679] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:40.616 [2024-07-27 01:30:32.150692] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150699] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.150706] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x686e10) 00:22:40.616 [2024-07-27 01:30:32.150715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.616 [2024-07-27 01:30:32.150737] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706bf0, cid 0, qid 0 00:22:40.616 [2024-07-27 01:30:32.150766] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706d50, cid 1, qid 0 00:22:40.616 [2024-07-27 01:30:32.150775] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706eb0, cid 2, qid 0 00:22:40.616 [2024-07-27 01:30:32.150783] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x707010, cid 3, qid 0 00:22:40.616 [2024-07-27 01:30:32.150791] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x707170, cid 4, qid 0 00:22:40.616 [2024-07-27 01:30:32.150978] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.616 [2024-07-27 01:30:32.150993] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.616 [2024-07-27 01:30:32.151000] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.151007] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x707170) on tqpair=0x686e10 00:22:40.616 [2024-07-27 01:30:32.151015] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:40.616 [2024-07-27 01:30:32.151025] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:40.616 [2024-07-27 01:30:32.151039] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:40.616 [2024-07-27 01:30:32.151079] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:40.616 [2024-07-27 01:30:32.151092] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.151100] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.151106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x686e10) 00:22:40.616 [2024-07-27 01:30:32.151133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.616 [2024-07-27 01:30:32.151155] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x707170, cid 4, qid 0 00:22:40.616 [2024-07-27 01:30:32.151342] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.616 [2024-07-27 01:30:32.151354] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.616 [2024-07-27 01:30:32.151361] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.151368] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x707170) on tqpair=0x686e10 00:22:40.616 [2024-07-27 01:30:32.151434] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:40.616 [2024-07-27 01:30:32.151467] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:40.616 [2024-07-27 01:30:32.151482] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.151490] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.151496] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x686e10) 00:22:40.616 [2024-07-27 01:30:32.151506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.616 [2024-07-27 01:30:32.151527] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x707170, cid 4, qid 0 00:22:40.616 [2024-07-27 01:30:32.151696] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.616 [2024-07-27 01:30:32.151711] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.616 [2024-07-27 01:30:32.151719] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.151725] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x686e10): datao=0, datal=4096, cccid=4 00:22:40.616 [2024-07-27 01:30:32.151733] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x707170) on tqpair(0x686e10): expected_datao=0, payload_size=4096 00:22:40.616 [2024-07-27 01:30:32.151785] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.151796] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.151927] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.616 [2024-07-27 01:30:32.151938] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.616 [2024-07-27 01:30:32.151945] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.616 [2024-07-27 01:30:32.151952] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x707170) on tqpair=0x686e10 00:22:40.616 [2024-07-27 01:30:32.151974] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:40.616 [2024-07-27 01:30:32.151995] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:40.616 [2024-07-27 01:30:32.152013] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:40.617 [2024-07-27 01:30:32.152027] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.152035] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.152042] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x686e10) 00:22:40.617 [2024-07-27 01:30:32.152052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.617 [2024-07-27 01:30:32.156085] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x707170, cid 4, qid 0 00:22:40.617 [2024-07-27 01:30:32.156317] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.617 [2024-07-27 01:30:32.156333] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.617 [2024-07-27 01:30:32.156341] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.156347] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x686e10): datao=0, datal=4096, cccid=4 00:22:40.617 [2024-07-27 01:30:32.156355] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x707170) on tqpair(0x686e10): expected_datao=0, payload_size=4096 00:22:40.617 [2024-07-27 01:30:32.156387] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.156397] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.156533] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.617 [2024-07-27 01:30:32.156548] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.617 [2024-07-27 01:30:32.156555] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.156562] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x707170) on tqpair=0x686e10 00:22:40.617 [2024-07-27 01:30:32.156588] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:40.617 [2024-07-27 01:30:32.156608] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:40.617 [2024-07-27 01:30:32.156623] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.156632] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.156638] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x686e10) 00:22:40.617 [2024-07-27 01:30:32.156649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.617 [2024-07-27 01:30:32.156671] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x707170, cid 4, qid 0 00:22:40.617 [2024-07-27 01:30:32.156817] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.617 [2024-07-27 01:30:32.156832] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.617 [2024-07-27 01:30:32.156839] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.156850] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x686e10): datao=0, datal=4096, cccid=4 00:22:40.617 [2024-07-27 01:30:32.156858] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x707170) on tqpair(0x686e10): expected_datao=0, payload_size=4096 00:22:40.617 [2024-07-27 01:30:32.156909] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.156918] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.157054] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.617 [2024-07-27 01:30:32.157077] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.617 [2024-07-27 01:30:32.157084] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.157091] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x707170) on tqpair=0x686e10 00:22:40.617 [2024-07-27 01:30:32.157106] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:40.617 [2024-07-27 01:30:32.157121] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:40.617 [2024-07-27 01:30:32.157138] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:40.617 [2024-07-27 01:30:32.157150] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:40.617 [2024-07-27 01:30:32.157159] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:40.617 [2024-07-27 01:30:32.157169] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:40.617 [2024-07-27 01:30:32.157177] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:40.617 [2024-07-27 01:30:32.157186] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:40.617 [2024-07-27 01:30:32.157205] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.157214] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.157221] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x686e10) 00:22:40.617 [2024-07-27 01:30:32.157232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.617 [2024-07-27 01:30:32.157243] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.157250] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.157257] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x686e10) 00:22:40.617 [2024-07-27 01:30:32.157266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.617 [2024-07-27 01:30:32.157291] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x707170, cid 4, qid 0 00:22:40.617 [2024-07-27 01:30:32.157303] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7072d0, cid 5, qid 0 00:22:40.617 [2024-07-27 01:30:32.157458] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.617 [2024-07-27 01:30:32.157473] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.617 [2024-07-27 01:30:32.157480] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.157487] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x707170) on tqpair=0x686e10 00:22:40.617 [2024-07-27 01:30:32.157498] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.617 [2024-07-27 01:30:32.157507] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.617 [2024-07-27 01:30:32.157514] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.157524] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7072d0) on 
tqpair=0x686e10 00:22:40.617 [2024-07-27 01:30:32.157541] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.157551] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.157557] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x686e10) 00:22:40.617 [2024-07-27 01:30:32.157568] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.617 [2024-07-27 01:30:32.157604] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7072d0, cid 5, qid 0 00:22:40.617 [2024-07-27 01:30:32.157791] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.617 [2024-07-27 01:30:32.157804] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.617 [2024-07-27 01:30:32.157811] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.157818] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7072d0) on tqpair=0x686e10 00:22:40.617 [2024-07-27 01:30:32.157834] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.157843] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.157849] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x686e10) 00:22:40.617 [2024-07-27 01:30:32.157860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.617 [2024-07-27 01:30:32.157880] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7072d0, cid 5, qid 0 00:22:40.617 [2024-07-27 01:30:32.158070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.617 [2024-07-27 01:30:32.158084] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.617 [2024-07-27 01:30:32.158091] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.158098] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7072d0) on tqpair=0x686e10 00:22:40.617 [2024-07-27 01:30:32.158113] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.158122] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.158129] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x686e10) 00:22:40.617 [2024-07-27 01:30:32.158140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.617 [2024-07-27 01:30:32.158160] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7072d0, cid 5, qid 0 00:22:40.617 [2024-07-27 01:30:32.158298] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.617 [2024-07-27 01:30:32.158311] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.617 [2024-07-27 01:30:32.158318] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.158324] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7072d0) on tqpair=0x686e10 00:22:40.617 [2024-07-27 01:30:32.158344] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.617 [2024-07-27 
01:30:32.158354] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.158361] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x686e10) 00:22:40.617 [2024-07-27 01:30:32.158371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.617 [2024-07-27 01:30:32.158384] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.158392] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.617 [2024-07-27 01:30:32.158398] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x686e10) 00:22:40.617 [2024-07-27 01:30:32.158408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.618 [2024-07-27 01:30:32.158424] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.158432] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.158439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x686e10) 00:22:40.618 [2024-07-27 01:30:32.158448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.618 [2024-07-27 01:30:32.158477] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.158484] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.158491] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x686e10) 00:22:40.618 [2024-07-27 01:30:32.158500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.618 [2024-07-27 01:30:32.158522] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7072d0, cid 5, qid 0 00:22:40.618 [2024-07-27 01:30:32.158548] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x707170, cid 4, qid 0 00:22:40.618 [2024-07-27 01:30:32.158556] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x707430, cid 6, qid 0 00:22:40.618 [2024-07-27 01:30:32.158564] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x707590, cid 7, qid 0 00:22:40.618 [2024-07-27 01:30:32.158779] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.618 [2024-07-27 01:30:32.158794] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.618 [2024-07-27 01:30:32.158801] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.158808] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x686e10): datao=0, datal=8192, cccid=5 00:22:40.618 [2024-07-27 01:30:32.158816] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7072d0) on tqpair(0x686e10): expected_datao=0, payload_size=8192 00:22:40.618 [2024-07-27 01:30:32.158859] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.158869] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.158878] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.618 [2024-07-27 01:30:32.158887] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.618 [2024-07-27 01:30:32.158894] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.158900] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x686e10): datao=0, datal=512, cccid=4 00:22:40.618 [2024-07-27 01:30:32.158908] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x707170) on tqpair(0x686e10): expected_datao=0, payload_size=512 00:22:40.618 [2024-07-27 01:30:32.158918] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.158926] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.158934] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.618 [2024-07-27 01:30:32.158943] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.618 [2024-07-27 01:30:32.158950] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.158956] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x686e10): datao=0, datal=512, cccid=6 00:22:40.618 [2024-07-27 01:30:32.158964] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x707430) on tqpair(0x686e10): expected_datao=0, payload_size=512 00:22:40.618 [2024-07-27 01:30:32.158974] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.158981] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.158990] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.618 [2024-07-27 01:30:32.158999] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.618 [2024-07-27 01:30:32.159009] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.159016] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x686e10): datao=0, datal=4096, cccid=7 00:22:40.618 [2024-07-27 01:30:32.159024] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x707590) on tqpair(0x686e10): expected_datao=0, payload_size=4096 00:22:40.618 [2024-07-27 01:30:32.159035] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.159042] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.159054] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.618 [2024-07-27 01:30:32.159074] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.618 [2024-07-27 01:30:32.159082] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.159089] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x7072d0) on tqpair=0x686e10 00:22:40.618 [2024-07-27 01:30:32.159109] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.618 [2024-07-27 01:30:32.159120] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.618 [2024-07-27 01:30:32.159127] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.159134] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x707170) on tqpair=0x686e10 00:22:40.618 [2024-07-27 01:30:32.159148] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:22:40.618 [2024-07-27 01:30:32.159158] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.618 [2024-07-27 01:30:32.159165] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.159172] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x707430) on tqpair=0x686e10 00:22:40.618 [2024-07-27 01:30:32.159182] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.618 [2024-07-27 01:30:32.159192] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.618 [2024-07-27 01:30:32.159199] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.618 [2024-07-27 01:30:32.159206] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x707590) on tqpair=0x686e10 00:22:40.618 ===================================================== 00:22:40.618 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:40.618 ===================================================== 00:22:40.618 Controller Capabilities/Features 00:22:40.618 ================================ 00:22:40.618 Vendor ID: 8086 00:22:40.618 Subsystem Vendor ID: 8086 00:22:40.618 Serial Number: SPDK00000000000001 00:22:40.618 Model Number: SPDK bdev Controller 00:22:40.618 Firmware Version: 24.01.1 00:22:40.618 Recommended Arb Burst: 6 00:22:40.618 IEEE OUI Identifier: e4 d2 5c 00:22:40.618 Multi-path I/O 00:22:40.618 May have multiple subsystem ports: Yes 00:22:40.618 May have multiple controllers: Yes 00:22:40.618 Associated with SR-IOV VF: No 00:22:40.618 Max Data Transfer Size: 131072 00:22:40.618 Max Number of Namespaces: 32 00:22:40.618 Max Number of I/O Queues: 127 00:22:40.618 NVMe Specification Version (VS): 1.3 00:22:40.618 NVMe Specification Version (Identify): 1.3 00:22:40.618 Maximum Queue Entries: 128 00:22:40.618 Contiguous Queues Required: Yes 00:22:40.618 Arbitration Mechanisms Supported 00:22:40.618 Weighted Round Robin: Not Supported 00:22:40.618 Vendor Specific: Not Supported 00:22:40.618 Reset Timeout: 15000 ms 00:22:40.618 Doorbell Stride: 4 bytes 00:22:40.618 NVM Subsystem Reset: Not Supported 00:22:40.618 Command Sets Supported 00:22:40.618 NVM Command Set: Supported 00:22:40.618 Boot Partition: Not Supported 00:22:40.618 Memory Page Size Minimum: 4096 bytes 00:22:40.618 Memory Page Size Maximum: 4096 bytes 00:22:40.618 Persistent Memory Region: Not Supported 00:22:40.618 Optional Asynchronous Events Supported 00:22:40.618 Namespace Attribute Notices: Supported 00:22:40.618 Firmware Activation Notices: Not Supported 00:22:40.618 ANA Change Notices: Not Supported 00:22:40.618 PLE Aggregate Log Change Notices: Not Supported 00:22:40.618 LBA Status Info Alert Notices: Not Supported 00:22:40.618 EGE Aggregate Log Change Notices: Not Supported 00:22:40.618 Normal NVM Subsystem Shutdown event: Not Supported 00:22:40.618 Zone Descriptor Change Notices: Not Supported 00:22:40.618 Discovery Log Change Notices: Not Supported 00:22:40.618 Controller Attributes 00:22:40.618 128-bit Host Identifier: Supported 00:22:40.618 Non-Operational Permissive Mode: Not Supported 00:22:40.618 NVM Sets: Not Supported 00:22:40.618 Read Recovery Levels: Not Supported 00:22:40.618 Endurance Groups: Not Supported 00:22:40.618 Predictable Latency Mode: Not Supported 00:22:40.618 Traffic Based Keep ALive: Not Supported 00:22:40.618 Namespace Granularity: Not Supported 00:22:40.618 SQ Associations: Not Supported 00:22:40.618 UUID List: Not Supported 00:22:40.618 
Multi-Domain Subsystem: Not Supported 00:22:40.618 Fixed Capacity Management: Not Supported 00:22:40.618 Variable Capacity Management: Not Supported 00:22:40.618 Delete Endurance Group: Not Supported 00:22:40.618 Delete NVM Set: Not Supported 00:22:40.618 Extended LBA Formats Supported: Not Supported 00:22:40.618 Flexible Data Placement Supported: Not Supported 00:22:40.618 00:22:40.618 Controller Memory Buffer Support 00:22:40.618 ================================ 00:22:40.618 Supported: No 00:22:40.618 00:22:40.618 Persistent Memory Region Support 00:22:40.618 ================================ 00:22:40.618 Supported: No 00:22:40.618 00:22:40.618 Admin Command Set Attributes 00:22:40.618 ============================ 00:22:40.618 Security Send/Receive: Not Supported 00:22:40.618 Format NVM: Not Supported 00:22:40.618 Firmware Activate/Download: Not Supported 00:22:40.618 Namespace Management: Not Supported 00:22:40.618 Device Self-Test: Not Supported 00:22:40.618 Directives: Not Supported 00:22:40.618 NVMe-MI: Not Supported 00:22:40.618 Virtualization Management: Not Supported 00:22:40.618 Doorbell Buffer Config: Not Supported 00:22:40.618 Get LBA Status Capability: Not Supported 00:22:40.619 Command & Feature Lockdown Capability: Not Supported 00:22:40.619 Abort Command Limit: 4 00:22:40.619 Async Event Request Limit: 4 00:22:40.619 Number of Firmware Slots: N/A 00:22:40.619 Firmware Slot 1 Read-Only: N/A 00:22:40.619 Firmware Activation Without Reset: N/A 00:22:40.619 Multiple Update Detection Support: N/A 00:22:40.619 Firmware Update Granularity: No Information Provided 00:22:40.619 Per-Namespace SMART Log: No 00:22:40.619 Asymmetric Namespace Access Log Page: Not Supported 00:22:40.619 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:40.619 Command Effects Log Page: Supported 00:22:40.619 Get Log Page Extended Data: Supported 00:22:40.619 Telemetry Log Pages: Not Supported 00:22:40.619 Persistent Event Log Pages: Not Supported 00:22:40.619 Supported Log Pages Log Page: May Support 00:22:40.619 Commands Supported & Effects Log Page: Not Supported 00:22:40.619 Feature Identifiers & Effects Log Page:May Support 00:22:40.619 NVMe-MI Commands & Effects Log Page: May Support 00:22:40.619 Data Area 4 for Telemetry Log: Not Supported 00:22:40.619 Error Log Page Entries Supported: 128 00:22:40.619 Keep Alive: Supported 00:22:40.619 Keep Alive Granularity: 10000 ms 00:22:40.619 00:22:40.619 NVM Command Set Attributes 00:22:40.619 ========================== 00:22:40.619 Submission Queue Entry Size 00:22:40.619 Max: 64 00:22:40.619 Min: 64 00:22:40.619 Completion Queue Entry Size 00:22:40.619 Max: 16 00:22:40.619 Min: 16 00:22:40.619 Number of Namespaces: 32 00:22:40.619 Compare Command: Supported 00:22:40.619 Write Uncorrectable Command: Not Supported 00:22:40.619 Dataset Management Command: Supported 00:22:40.619 Write Zeroes Command: Supported 00:22:40.619 Set Features Save Field: Not Supported 00:22:40.619 Reservations: Supported 00:22:40.619 Timestamp: Not Supported 00:22:40.619 Copy: Supported 00:22:40.619 Volatile Write Cache: Present 00:22:40.619 Atomic Write Unit (Normal): 1 00:22:40.619 Atomic Write Unit (PFail): 1 00:22:40.619 Atomic Compare & Write Unit: 1 00:22:40.619 Fused Compare & Write: Supported 00:22:40.619 Scatter-Gather List 00:22:40.619 SGL Command Set: Supported 00:22:40.619 SGL Keyed: Supported 00:22:40.619 SGL Bit Bucket Descriptor: Not Supported 00:22:40.619 SGL Metadata Pointer: Not Supported 00:22:40.619 Oversized SGL: Not Supported 00:22:40.619 SGL Metadata Address: Not 
Supported 00:22:40.619 SGL Offset: Supported 00:22:40.619 Transport SGL Data Block: Not Supported 00:22:40.619 Replay Protected Memory Block: Not Supported 00:22:40.619 00:22:40.619 Firmware Slot Information 00:22:40.619 ========================= 00:22:40.619 Active slot: 1 00:22:40.619 Slot 1 Firmware Revision: 24.01.1 00:22:40.619 00:22:40.619 00:22:40.619 Commands Supported and Effects 00:22:40.619 ============================== 00:22:40.619 Admin Commands 00:22:40.619 -------------- 00:22:40.619 Get Log Page (02h): Supported 00:22:40.619 Identify (06h): Supported 00:22:40.619 Abort (08h): Supported 00:22:40.619 Set Features (09h): Supported 00:22:40.619 Get Features (0Ah): Supported 00:22:40.619 Asynchronous Event Request (0Ch): Supported 00:22:40.619 Keep Alive (18h): Supported 00:22:40.619 I/O Commands 00:22:40.619 ------------ 00:22:40.619 Flush (00h): Supported LBA-Change 00:22:40.619 Write (01h): Supported LBA-Change 00:22:40.619 Read (02h): Supported 00:22:40.619 Compare (05h): Supported 00:22:40.619 Write Zeroes (08h): Supported LBA-Change 00:22:40.619 Dataset Management (09h): Supported LBA-Change 00:22:40.619 Copy (19h): Supported LBA-Change 00:22:40.619 Unknown (79h): Supported LBA-Change 00:22:40.619 Unknown (7Ah): Supported 00:22:40.619 00:22:40.619 Error Log 00:22:40.619 ========= 00:22:40.619 00:22:40.619 Arbitration 00:22:40.619 =========== 00:22:40.619 Arbitration Burst: 1 00:22:40.619 00:22:40.619 Power Management 00:22:40.619 ================ 00:22:40.619 Number of Power States: 1 00:22:40.619 Current Power State: Power State #0 00:22:40.619 Power State #0: 00:22:40.619 Max Power: 0.00 W 00:22:40.619 Non-Operational State: Operational 00:22:40.619 Entry Latency: Not Reported 00:22:40.619 Exit Latency: Not Reported 00:22:40.619 Relative Read Throughput: 0 00:22:40.619 Relative Read Latency: 0 00:22:40.619 Relative Write Throughput: 0 00:22:40.619 Relative Write Latency: 0 00:22:40.619 Idle Power: Not Reported 00:22:40.619 Active Power: Not Reported 00:22:40.619 Non-Operational Permissive Mode: Not Supported 00:22:40.619 00:22:40.619 Health Information 00:22:40.619 ================== 00:22:40.619 Critical Warnings: 00:22:40.619 Available Spare Space: OK 00:22:40.619 Temperature: OK 00:22:40.619 Device Reliability: OK 00:22:40.619 Read Only: No 00:22:40.619 Volatile Memory Backup: OK 00:22:40.619 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:40.619 Temperature Threshold: [2024-07-27 01:30:32.159328] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.619 [2024-07-27 01:30:32.159341] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.619 [2024-07-27 01:30:32.159362] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x686e10) 00:22:40.619 [2024-07-27 01:30:32.159373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.619 [2024-07-27 01:30:32.159396] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x707590, cid 7, qid 0 00:22:40.619 [2024-07-27 01:30:32.159604] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.619 [2024-07-27 01:30:32.159620] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.619 [2024-07-27 01:30:32.159628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.619 [2024-07-27 01:30:32.159634] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x707590) on 
tqpair=0x686e10 00:22:40.619 [2024-07-27 01:30:32.159676] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:40.619 [2024-07-27 01:30:32.159698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.619 [2024-07-27 01:30:32.159726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.619 [2024-07-27 01:30:32.159735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.619 [2024-07-27 01:30:32.159745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.619 [2024-07-27 01:30:32.159757] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.619 [2024-07-27 01:30:32.159768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.619 [2024-07-27 01:30:32.159776] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686e10) 00:22:40.619 [2024-07-27 01:30:32.159786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.620 [2024-07-27 01:30:32.159808] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x707010, cid 3, qid 0 00:22:40.620 [2024-07-27 01:30:32.159963] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.620 [2024-07-27 01:30:32.159979] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.620 [2024-07-27 01:30:32.159986] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.620 [2024-07-27 01:30:32.159993] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x707010) on tqpair=0x686e10 00:22:40.620 [2024-07-27 01:30:32.160004] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.620 [2024-07-27 01:30:32.160012] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.620 [2024-07-27 01:30:32.160019] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686e10) 00:22:40.620 [2024-07-27 01:30:32.160029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.620 [2024-07-27 01:30:32.160056] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x707010, cid 3, qid 0 00:22:40.620 [2024-07-27 01:30:32.164082] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.620 [2024-07-27 01:30:32.164094] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.620 [2024-07-27 01:30:32.164101] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.620 [2024-07-27 01:30:32.164108] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x707010) on tqpair=0x686e10 00:22:40.620 [2024-07-27 01:30:32.164115] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:40.620 [2024-07-27 01:30:32.164123] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:40.620 [2024-07-27 01:30:32.164154] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.620 [2024-07-27 01:30:32.164165] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.620 [2024-07-27 01:30:32.164171] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686e10) 00:22:40.620 [2024-07-27 01:30:32.164182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.620 [2024-07-27 01:30:32.164205] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x707010, cid 3, qid 0 00:22:40.620 [2024-07-27 01:30:32.164357] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.620 [2024-07-27 01:30:32.164372] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.620 [2024-07-27 01:30:32.164379] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.620 [2024-07-27 01:30:32.164386] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x707010) on tqpair=0x686e10 00:22:40.620 [2024-07-27 01:30:32.164399] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:22:40.620 0 Kelvin (-273 Celsius) 00:22:40.620 Available Spare: 0% 00:22:40.620 Available Spare Threshold: 0% 00:22:40.620 Life Percentage Used: 0% 00:22:40.620 Data Units Read: 0 00:22:40.620 Data Units Written: 0 00:22:40.620 Host Read Commands: 0 00:22:40.620 Host Write Commands: 0 00:22:40.620 Controller Busy Time: 0 minutes 00:22:40.620 Power Cycles: 0 00:22:40.620 Power On Hours: 0 hours 00:22:40.620 Unsafe Shutdowns: 0 00:22:40.620 Unrecoverable Media Errors: 0 00:22:40.620 Lifetime Error Log Entries: 0 00:22:40.620 Warning Temperature Time: 0 minutes 00:22:40.620 Critical Temperature Time: 0 minutes 00:22:40.620 00:22:40.620 Number of Queues 00:22:40.620 ================ 00:22:40.620 Number of I/O Submission Queues: 127 00:22:40.620 Number of I/O Completion Queues: 127 00:22:40.620 00:22:40.620 Active Namespaces 00:22:40.620 ================= 00:22:40.620 Namespace ID:1 00:22:40.620 Error Recovery Timeout: Unlimited 00:22:40.620 Command Set Identifier: NVM (00h) 00:22:40.620 Deallocate: Supported 00:22:40.620 Deallocated/Unwritten Error: Not Supported 00:22:40.620 Deallocated Read Value: Unknown 00:22:40.620 Deallocate in Write Zeroes: Not Supported 00:22:40.620 Deallocated Guard Field: 0xFFFF 00:22:40.620 Flush: Supported 00:22:40.620 Reservation: Supported 00:22:40.620 Namespace Sharing Capabilities: Multiple Controllers 00:22:40.620 Size (in LBAs): 131072 (0GiB) 00:22:40.620 Capacity (in LBAs): 131072 (0GiB) 00:22:40.620 Utilization (in LBAs): 131072 (0GiB) 00:22:40.620 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:40.620 EUI64: ABCDEF0123456789 00:22:40.620 UUID: a106ec24-71db-4690-a8cc-3f3c5779b823 00:22:40.620 Thin Provisioning: Not Supported 00:22:40.620 Per-NS Atomic Units: Yes 00:22:40.620 Atomic Boundary Size (Normal): 0 00:22:40.620 Atomic Boundary Size (PFail): 0 00:22:40.620 Atomic Boundary Offset: 0 00:22:40.620 Maximum Single Source Range Length: 65535 00:22:40.620 Maximum Copy Length: 65535 00:22:40.620 Maximum Source Range Count: 1 00:22:40.620 NGUID/EUI64 Never Reused: No 00:22:40.620 Namespace Write Protected: No 00:22:40.620 Number of LBA Formats: 1 00:22:40.620 Current LBA Format: LBA Format #00 00:22:40.620 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:40.620 00:22:40.620 01:30:32 -- host/identify.sh@51 -- # sync 00:22:40.620 01:30:32 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:40.620 01:30:32 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.620 01:30:32 -- common/autotest_common.sh@10 -- # set +x 00:22:40.620 01:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.620 01:30:32 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:40.620 01:30:32 -- host/identify.sh@56 -- # nvmftestfini 00:22:40.620 01:30:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:40.620 01:30:32 -- nvmf/common.sh@116 -- # sync 00:22:40.620 01:30:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:40.620 01:30:32 -- nvmf/common.sh@119 -- # set +e 00:22:40.620 01:30:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:40.620 01:30:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:40.620 rmmod nvme_tcp 00:22:40.620 rmmod nvme_fabrics 00:22:40.620 rmmod nvme_keyring 00:22:40.620 01:30:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:40.620 01:30:32 -- nvmf/common.sh@123 -- # set -e 00:22:40.620 01:30:32 -- nvmf/common.sh@124 -- # return 0 00:22:40.620 01:30:32 -- nvmf/common.sh@477 -- # '[' -n 708903 ']' 00:22:40.620 01:30:32 -- nvmf/common.sh@478 -- # killprocess 708903 00:22:40.620 01:30:32 -- common/autotest_common.sh@926 -- # '[' -z 708903 ']' 00:22:40.620 01:30:32 -- common/autotest_common.sh@930 -- # kill -0 708903 00:22:40.620 01:30:32 -- common/autotest_common.sh@931 -- # uname 00:22:40.620 01:30:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:40.620 01:30:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 708903 00:22:40.620 01:30:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:40.620 01:30:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:40.620 01:30:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 708903' 00:22:40.620 killing process with pid 708903 00:22:40.620 01:30:32 -- common/autotest_common.sh@945 -- # kill 708903 00:22:40.620 [2024-07-27 01:30:32.267282] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:40.620 01:30:32 -- common/autotest_common.sh@950 -- # wait 708903 00:22:40.878 01:30:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:40.878 01:30:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:40.878 01:30:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:40.878 01:30:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:40.878 01:30:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:40.878 01:30:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.878 01:30:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.878 01:30:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.411 01:30:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:43.411 00:22:43.411 real 0m5.820s 00:22:43.411 user 0m6.721s 00:22:43.411 sys 0m1.777s 00:22:43.411 01:30:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.411 01:30:34 -- common/autotest_common.sh@10 -- # set +x 00:22:43.411 ************************************ 00:22:43.411 END TEST nvmf_identify 00:22:43.411 ************************************ 00:22:43.411 01:30:34 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:43.411 01:30:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:43.411 01:30:34 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:22:43.411 01:30:34 -- common/autotest_common.sh@10 -- # set +x 00:22:43.411 ************************************ 00:22:43.411 START TEST nvmf_perf 00:22:43.411 ************************************ 00:22:43.411 01:30:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:43.411 * Looking for test storage... 00:22:43.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:43.411 01:30:34 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.411 01:30:34 -- nvmf/common.sh@7 -- # uname -s 00:22:43.411 01:30:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.411 01:30:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.411 01:30:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.411 01:30:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.411 01:30:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.411 01:30:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.411 01:30:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.411 01:30:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.411 01:30:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.411 01:30:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.411 01:30:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.411 01:30:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.411 01:30:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.411 01:30:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.411 01:30:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.411 01:30:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.411 01:30:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.411 01:30:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.411 01:30:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.411 01:30:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.411 01:30:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.411 01:30:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.411 01:30:34 -- paths/export.sh@5 -- # export PATH 00:22:43.411 01:30:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.411 01:30:34 -- nvmf/common.sh@46 -- # : 0 00:22:43.411 01:30:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:43.411 01:30:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:43.411 01:30:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:43.411 01:30:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.411 01:30:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.411 01:30:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:43.411 01:30:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:43.411 01:30:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:43.411 01:30:34 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:43.411 01:30:34 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:43.411 01:30:34 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:43.411 01:30:34 -- host/perf.sh@17 -- # nvmftestinit 00:22:43.411 01:30:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:43.411 01:30:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.411 01:30:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:43.411 01:30:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:43.411 01:30:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:43.411 01:30:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.411 01:30:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.411 01:30:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.411 01:30:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:43.411 01:30:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:43.411 01:30:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:43.411 01:30:34 -- common/autotest_common.sh@10 -- # set +x 00:22:45.313 01:30:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:45.313 01:30:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:45.313 01:30:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:45.313 01:30:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:45.313 01:30:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:45.313 01:30:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:45.313 01:30:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:45.313 01:30:36 -- nvmf/common.sh@294 -- # net_devs=() 
00:22:45.313 01:30:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:45.313 01:30:36 -- nvmf/common.sh@295 -- # e810=() 00:22:45.313 01:30:36 -- nvmf/common.sh@295 -- # local -ga e810 00:22:45.313 01:30:36 -- nvmf/common.sh@296 -- # x722=() 00:22:45.313 01:30:36 -- nvmf/common.sh@296 -- # local -ga x722 00:22:45.313 01:30:36 -- nvmf/common.sh@297 -- # mlx=() 00:22:45.313 01:30:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:45.313 01:30:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.313 01:30:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.313 01:30:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.313 01:30:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.313 01:30:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.313 01:30:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.313 01:30:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.313 01:30:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.313 01:30:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.313 01:30:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.313 01:30:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.313 01:30:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:45.313 01:30:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:45.313 01:30:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:45.313 01:30:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:45.313 01:30:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:45.313 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:45.313 01:30:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:45.313 01:30:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:45.313 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:45.313 01:30:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:45.313 01:30:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:45.313 01:30:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.313 01:30:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:45.313 01:30:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:22:45.313 01:30:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:45.313 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:45.313 01:30:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.313 01:30:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:45.313 01:30:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.313 01:30:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:45.313 01:30:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.313 01:30:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:45.313 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:45.313 01:30:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.313 01:30:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:45.313 01:30:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:45.313 01:30:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:45.313 01:30:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.313 01:30:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.313 01:30:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.313 01:30:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:45.313 01:30:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.313 01:30:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.313 01:30:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:45.313 01:30:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.313 01:30:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.313 01:30:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:45.313 01:30:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:45.313 01:30:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.313 01:30:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.313 01:30:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.313 01:30:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.313 01:30:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:45.313 01:30:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.313 01:30:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.313 01:30:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.313 01:30:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:45.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:22:45.313 00:22:45.313 --- 10.0.0.2 ping statistics --- 00:22:45.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.313 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:45.313 01:30:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:45.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:22:45.313 00:22:45.313 --- 10.0.0.1 ping statistics --- 00:22:45.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.313 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:22:45.313 01:30:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.313 01:30:36 -- nvmf/common.sh@410 -- # return 0 00:22:45.313 01:30:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:45.313 01:30:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.313 01:30:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:45.313 01:30:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.313 01:30:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:45.313 01:30:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:45.313 01:30:36 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:45.313 01:30:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:45.313 01:30:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:45.313 01:30:36 -- common/autotest_common.sh@10 -- # set +x 00:22:45.313 01:30:36 -- nvmf/common.sh@469 -- # nvmfpid=711128 00:22:45.313 01:30:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:45.313 01:30:36 -- nvmf/common.sh@470 -- # waitforlisten 711128 00:22:45.313 01:30:36 -- common/autotest_common.sh@819 -- # '[' -z 711128 ']' 00:22:45.313 01:30:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.313 01:30:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:45.313 01:30:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.313 01:30:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:45.313 01:30:36 -- common/autotest_common.sh@10 -- # set +x 00:22:45.313 [2024-07-27 01:30:36.859926] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:45.313 [2024-07-27 01:30:36.860010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.313 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.313 [2024-07-27 01:30:36.923599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.313 [2024-07-27 01:30:37.028662] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:45.313 [2024-07-27 01:30:37.028817] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.314 [2024-07-27 01:30:37.028833] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.314 [2024-07-27 01:30:37.028846] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
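
Note: the trace above shows nvmf_tcp_init moving the target-side port into its own network namespace, assigning addresses, verifying reachability, and then launching the target inside that namespace. A minimal sketch of the equivalent steps, using only the interface names, addresses, and binary path visible in this log (not a substitute for the helpers in nvmf/common.sh):

    # target port lives in its own netns; initiator port stays in the root netns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2   # sanity-check initiator -> target path before starting the app
    # start the target application inside the namespace (same flags as in the trace)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
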
00:22:45.314 [2024-07-27 01:30:37.028897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.314 [2024-07-27 01:30:37.028966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.314 [2024-07-27 01:30:37.029032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.314 [2024-07-27 01:30:37.029034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.248 01:30:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:46.248 01:30:37 -- common/autotest_common.sh@852 -- # return 0 00:22:46.248 01:30:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:46.248 01:30:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:46.248 01:30:37 -- common/autotest_common.sh@10 -- # set +x 00:22:46.248 01:30:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.248 01:30:37 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:46.248 01:30:37 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:49.530 01:30:40 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:49.530 01:30:40 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:49.530 01:30:41 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:22:49.530 01:30:41 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:49.788 01:30:41 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:49.788 01:30:41 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:22:49.788 01:30:41 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:49.788 01:30:41 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:49.788 01:30:41 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:50.081 [2024-07-27 01:30:41.769895] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.081 01:30:41 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:50.339 01:30:42 -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:50.339 01:30:42 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:50.597 01:30:42 -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:50.598 01:30:42 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:50.856 01:30:42 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.114 [2024-07-27 01:30:42.721490] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.114 01:30:42 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:51.373 01:30:42 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:22:51.373 01:30:42 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:51.373 01:30:42 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
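
Note: before the spdk_nvme_perf runs, the trace above configures the NVMe-oF/TCP target over the RPC socket. A condensed sketch of that rpc.py sequence, repeating the identifiers seen in the log ($RPC is shorthand here for the rpc.py path used throughout this workspace; the default /var/tmp/spdk.sock socket is assumed):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # 64 MB malloc bdev, 512 B blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local NVMe at 0000:88:00.0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the listener up, the perf runs that follow connect with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' and sweep queue depth and IO size, as recorded below.
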
00:22:51.373 01:30:42 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:52.760 Initializing NVMe Controllers 00:22:52.760 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:22:52.760 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:22:52.760 Initialization complete. Launching workers. 00:22:52.760 ======================================================== 00:22:52.760 Latency(us) 00:22:52.760 Device Information : IOPS MiB/s Average min max 00:22:52.760 PCIE (0000:88:00.0) NSID 1 from core 0: 87774.00 342.87 364.12 45.27 4440.85 00:22:52.760 ======================================================== 00:22:52.760 Total : 87774.00 342.87 364.12 45.27 4440.85 00:22:52.760 00:22:52.760 01:30:44 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:52.760 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.137 Initializing NVMe Controllers 00:22:54.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:54.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:54.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:54.137 Initialization complete. Launching workers. 00:22:54.137 ======================================================== 00:22:54.137 Latency(us) 00:22:54.137 Device Information : IOPS MiB/s Average min max 00:22:54.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 95.00 0.37 10722.24 215.08 46445.00 00:22:54.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 63.00 0.25 16245.19 4976.48 50879.21 00:22:54.137 ======================================================== 00:22:54.137 Total : 158.00 0.62 12924.43 215.08 50879.21 00:22:54.137 00:22:54.137 01:30:45 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:54.137 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.513 Initializing NVMe Controllers 00:22:55.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:55.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:55.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:55.513 Initialization complete. Launching workers. 
00:22:55.513 ======================================================== 00:22:55.513 Latency(us) 00:22:55.513 Device Information : IOPS MiB/s Average min max 00:22:55.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5948.45 23.24 5381.83 892.54 12395.60 00:22:55.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3876.12 15.14 8268.39 6312.97 15714.99 00:22:55.513 ======================================================== 00:22:55.513 Total : 9824.57 38.38 6520.67 892.54 15714.99 00:22:55.513 00:22:55.513 01:30:47 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:55.513 01:30:47 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:55.513 01:30:47 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:55.513 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.805 Initializing NVMe Controllers 00:22:58.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:58.805 Controller IO queue size 128, less than required. 00:22:58.805 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:58.805 Controller IO queue size 128, less than required. 00:22:58.805 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:58.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:58.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:58.805 Initialization complete. Launching workers. 00:22:58.805 ======================================================== 00:22:58.805 Latency(us) 00:22:58.805 Device Information : IOPS MiB/s Average min max 00:22:58.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 746.49 186.62 177185.24 97842.29 255456.39 00:22:58.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 590.49 147.62 229533.96 110339.46 359180.58 00:22:58.805 ======================================================== 00:22:58.805 Total : 1336.98 334.25 200305.60 97842.29 359180.58 00:22:58.805 00:22:58.806 01:30:49 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:58.806 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.806 No valid NVMe controllers or AIO or URING devices found 00:22:58.806 Initializing NVMe Controllers 00:22:58.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:58.806 Controller IO queue size 128, less than required. 00:22:58.806 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:58.806 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:58.806 Controller IO queue size 128, less than required. 00:22:58.806 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:58.806 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:58.806 WARNING: Some requested NVMe devices were skipped 00:22:58.806 01:30:50 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:58.806 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.340 Initializing NVMe Controllers 00:23:01.340 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:01.340 Controller IO queue size 128, less than required. 00:23:01.340 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:01.340 Controller IO queue size 128, less than required. 00:23:01.340 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:01.340 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:01.340 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:01.340 Initialization complete. Launching workers. 00:23:01.340 00:23:01.340 ==================== 00:23:01.340 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:01.340 TCP transport: 00:23:01.340 polls: 29032 00:23:01.340 idle_polls: 10951 00:23:01.340 sock_completions: 18081 00:23:01.340 nvme_completions: 3695 00:23:01.340 submitted_requests: 5763 00:23:01.340 queued_requests: 1 00:23:01.340 00:23:01.340 ==================== 00:23:01.340 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:01.340 TCP transport: 00:23:01.340 polls: 29295 00:23:01.340 idle_polls: 11867 00:23:01.340 sock_completions: 17428 00:23:01.340 nvme_completions: 3561 00:23:01.340 submitted_requests: 5507 00:23:01.340 queued_requests: 1 00:23:01.340 ======================================================== 00:23:01.340 Latency(us) 00:23:01.340 Device Information : IOPS MiB/s Average min max 00:23:01.340 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 986.72 246.68 134063.66 79319.10 229473.20 00:23:01.340 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 953.73 238.43 138368.77 61443.17 196085.64 00:23:01.340 ======================================================== 00:23:01.340 Total : 1940.45 485.11 136179.62 61443.17 229473.20 00:23:01.340 00:23:01.340 01:30:52 -- host/perf.sh@66 -- # sync 00:23:01.340 01:30:52 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:01.340 01:30:52 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:23:01.340 01:30:52 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:23:01.340 01:30:52 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:23:04.625 01:30:55 -- host/perf.sh@72 -- # ls_guid=ccd21cda-8721-4173-b1d7-c34545023ffd 00:23:04.625 01:30:55 -- host/perf.sh@73 -- # get_lvs_free_mb ccd21cda-8721-4173-b1d7-c34545023ffd 00:23:04.625 01:30:55 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ccd21cda-8721-4173-b1d7-c34545023ffd 00:23:04.625 01:30:55 -- common/autotest_common.sh@1344 -- # local lvs_info 00:23:04.625 01:30:55 -- common/autotest_common.sh@1345 -- # local fc 00:23:04.625 01:30:55 -- common/autotest_common.sh@1346 -- # local cs 00:23:04.625 01:30:55 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:04.625 01:30:56 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:23:04.625 { 00:23:04.625 "uuid": "ccd21cda-8721-4173-b1d7-c34545023ffd", 00:23:04.625 "name": "lvs_0", 00:23:04.625 "base_bdev": "Nvme0n1", 00:23:04.625 "total_data_clusters": 238234, 00:23:04.625 "free_clusters": 238234, 00:23:04.625 "block_size": 512, 00:23:04.625 "cluster_size": 4194304 00:23:04.625 } 00:23:04.625 ]' 00:23:04.625 01:30:56 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ccd21cda-8721-4173-b1d7-c34545023ffd") .free_clusters' 00:23:04.625 01:30:56 -- common/autotest_common.sh@1348 -- # fc=238234 00:23:04.625 01:30:56 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ccd21cda-8721-4173-b1d7-c34545023ffd") .cluster_size' 00:23:04.625 01:30:56 -- common/autotest_common.sh@1349 -- # cs=4194304 00:23:04.625 01:30:56 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:23:04.625 01:30:56 -- common/autotest_common.sh@1353 -- # echo 952936 00:23:04.625 952936 00:23:04.625 01:30:56 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:23:04.625 01:30:56 -- host/perf.sh@78 -- # free_mb=20480 00:23:04.625 01:30:56 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ccd21cda-8721-4173-b1d7-c34545023ffd lbd_0 20480 00:23:05.193 01:30:56 -- host/perf.sh@80 -- # lb_guid=7c0019cd-dfb9-4c66-95d6-c7c434beb3a3 00:23:05.193 01:30:56 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 7c0019cd-dfb9-4c66-95d6-c7c434beb3a3 lvs_n_0 00:23:06.126 01:30:57 -- host/perf.sh@83 -- # ls_nested_guid=2b2e924e-5b05-4714-b2fe-2ea44aeb3e1f 00:23:06.126 01:30:57 -- host/perf.sh@84 -- # get_lvs_free_mb 2b2e924e-5b05-4714-b2fe-2ea44aeb3e1f 00:23:06.126 01:30:57 -- common/autotest_common.sh@1343 -- # local lvs_uuid=2b2e924e-5b05-4714-b2fe-2ea44aeb3e1f 00:23:06.126 01:30:57 -- common/autotest_common.sh@1344 -- # local lvs_info 00:23:06.126 01:30:57 -- common/autotest_common.sh@1345 -- # local fc 00:23:06.126 01:30:57 -- common/autotest_common.sh@1346 -- # local cs 00:23:06.126 01:30:57 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:06.384 01:30:57 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:23:06.384 { 00:23:06.384 "uuid": "ccd21cda-8721-4173-b1d7-c34545023ffd", 00:23:06.384 "name": "lvs_0", 00:23:06.384 "base_bdev": "Nvme0n1", 00:23:06.384 "total_data_clusters": 238234, 00:23:06.384 "free_clusters": 233114, 00:23:06.384 "block_size": 512, 00:23:06.384 "cluster_size": 4194304 00:23:06.384 }, 00:23:06.384 { 00:23:06.384 "uuid": "2b2e924e-5b05-4714-b2fe-2ea44aeb3e1f", 00:23:06.384 "name": "lvs_n_0", 00:23:06.384 "base_bdev": "7c0019cd-dfb9-4c66-95d6-c7c434beb3a3", 00:23:06.384 "total_data_clusters": 5114, 00:23:06.384 "free_clusters": 5114, 00:23:06.384 "block_size": 512, 00:23:06.384 "cluster_size": 4194304 00:23:06.384 } 00:23:06.384 ]' 00:23:06.384 01:30:57 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="2b2e924e-5b05-4714-b2fe-2ea44aeb3e1f") .free_clusters' 00:23:06.384 01:30:57 -- common/autotest_common.sh@1348 -- # fc=5114 00:23:06.384 01:30:57 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="2b2e924e-5b05-4714-b2fe-2ea44aeb3e1f") .cluster_size' 00:23:06.384 01:30:57 -- common/autotest_common.sh@1349 -- # cs=4194304 00:23:06.384 01:30:57 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:23:06.384 01:30:57 -- common/autotest_common.sh@1353 -- # echo 20456 00:23:06.384 20456 00:23:06.384 01:30:57 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:23:06.384 01:30:57 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2b2e924e-5b05-4714-b2fe-2ea44aeb3e1f lbd_nest_0 20456 00:23:06.642 01:30:58 -- host/perf.sh@88 -- # lb_nested_guid=3ca78100-1543-4d54-b5aa-097383d2a928 00:23:06.642 01:30:58 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:06.899 01:30:58 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:23:06.899 01:30:58 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 3ca78100-1543-4d54-b5aa-097383d2a928 00:23:07.157 01:30:58 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.415 01:30:58 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:23:07.415 01:30:58 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:23:07.415 01:30:58 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:07.415 01:30:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:07.415 01:30:58 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:07.415 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.672 Initializing NVMe Controllers 00:23:19.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:19.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:19.672 Initialization complete. Launching workers. 00:23:19.672 ======================================================== 00:23:19.672 Latency(us) 00:23:19.672 Device Information : IOPS MiB/s Average min max 00:23:19.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.09 0.02 22685.87 233.00 48659.06 00:23:19.672 ======================================================== 00:23:19.672 Total : 44.09 0.02 22685.87 233.00 48659.06 00:23:19.672 00:23:19.672 01:31:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:19.672 01:31:09 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:19.672 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.645 Initializing NVMe Controllers 00:23:29.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:29.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:29.645 Initialization complete. Launching workers. 
00:23:29.645 ======================================================== 00:23:29.645 Latency(us) 00:23:29.645 Device Information : IOPS MiB/s Average min max 00:23:29.645 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 72.20 9.02 13915.49 5002.40 50877.60 00:23:29.645 ======================================================== 00:23:29.645 Total : 72.20 9.02 13915.49 5002.40 50877.60 00:23:29.645 00:23:29.645 01:31:19 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:29.645 01:31:19 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:29.645 01:31:19 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:29.645 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.629 Initializing NVMe Controllers 00:23:39.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:39.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:39.629 Initialization complete. Launching workers. 00:23:39.629 ======================================================== 00:23:39.629 Latency(us) 00:23:39.629 Device Information : IOPS MiB/s Average min max 00:23:39.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7190.70 3.51 4451.33 321.78 12043.38 00:23:39.629 ======================================================== 00:23:39.629 Total : 7190.70 3.51 4451.33 321.78 12043.38 00:23:39.629 00:23:39.629 01:31:30 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:39.629 01:31:30 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:39.629 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.602 Initializing NVMe Controllers 00:23:49.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:49.602 Initialization complete. Launching workers. 00:23:49.602 ======================================================== 00:23:49.602 Latency(us) 00:23:49.602 Device Information : IOPS MiB/s Average min max 00:23:49.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1767.64 220.95 18108.86 1614.08 55835.02 00:23:49.602 ======================================================== 00:23:49.602 Total : 1767.64 220.95 18108.86 1614.08 55835.02 00:23:49.602 00:23:49.602 01:31:40 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:49.602 01:31:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:49.602 01:31:40 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:49.602 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.579 Initializing NVMe Controllers 00:23:59.579 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:59.579 Controller IO queue size 128, less than required. 00:23:59.579 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:59.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:59.579 Initialization complete. Launching workers. 
00:23:59.579 ======================================================== 00:23:59.579 Latency(us) 00:23:59.579 Device Information : IOPS MiB/s Average min max 00:23:59.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11979.70 5.85 10691.31 1722.54 24890.09 00:23:59.579 ======================================================== 00:23:59.579 Total : 11979.70 5.85 10691.31 1722.54 24890.09 00:23:59.579 00:23:59.579 01:31:50 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:59.579 01:31:50 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:59.579 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.591 Initializing NVMe Controllers 00:24:09.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:09.591 Controller IO queue size 128, less than required. 00:24:09.591 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:09.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:09.591 Initialization complete. Launching workers. 00:24:09.591 ======================================================== 00:24:09.591 Latency(us) 00:24:09.591 Device Information : IOPS MiB/s Average min max 00:24:09.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1225.93 153.24 104566.22 24064.34 216359.86 00:24:09.591 ======================================================== 00:24:09.591 Total : 1225.93 153.24 104566.22 24064.34 216359.86 00:24:09.591 00:24:09.591 01:32:01 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.850 01:32:01 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3ca78100-1543-4d54-b5aa-097383d2a928 00:24:10.785 01:32:02 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:11.043 01:32:02 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7c0019cd-dfb9-4c66-95d6-c7c434beb3a3 00:24:11.306 01:32:02 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:11.566 01:32:03 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:11.566 01:32:03 -- host/perf.sh@114 -- # nvmftestfini 00:24:11.566 01:32:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:11.566 01:32:03 -- nvmf/common.sh@116 -- # sync 00:24:11.566 01:32:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:11.566 01:32:03 -- nvmf/common.sh@119 -- # set +e 00:24:11.566 01:32:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:11.566 01:32:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:11.566 rmmod nvme_tcp 00:24:11.566 rmmod nvme_fabrics 00:24:11.566 rmmod nvme_keyring 00:24:11.566 01:32:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:11.566 01:32:03 -- nvmf/common.sh@123 -- # set -e 00:24:11.566 01:32:03 -- nvmf/common.sh@124 -- # return 0 00:24:11.566 01:32:03 -- nvmf/common.sh@477 -- # '[' -n 711128 ']' 00:24:11.566 01:32:03 -- nvmf/common.sh@478 -- # killprocess 711128 00:24:11.566 01:32:03 -- common/autotest_common.sh@926 -- # '[' -z 711128 ']' 00:24:11.566 01:32:03 -- common/autotest_common.sh@930 -- # kill 
-0 711128 00:24:11.566 01:32:03 -- common/autotest_common.sh@931 -- # uname 00:24:11.566 01:32:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:11.566 01:32:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 711128 00:24:11.566 01:32:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:11.566 01:32:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:11.566 01:32:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 711128' 00:24:11.566 killing process with pid 711128 00:24:11.566 01:32:03 -- common/autotest_common.sh@945 -- # kill 711128 00:24:11.566 01:32:03 -- common/autotest_common.sh@950 -- # wait 711128 00:24:13.472 01:32:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:13.472 01:32:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:13.472 01:32:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:13.472 01:32:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:13.472 01:32:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:13.472 01:32:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.472 01:32:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:13.472 01:32:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.377 01:32:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:15.377 00:24:15.377 real 1m32.236s 00:24:15.377 user 5m34.638s 00:24:15.377 sys 0m15.850s 00:24:15.377 01:32:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:15.377 01:32:06 -- common/autotest_common.sh@10 -- # set +x 00:24:15.377 ************************************ 00:24:15.377 END TEST nvmf_perf 00:24:15.377 ************************************ 00:24:15.377 01:32:06 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:15.377 01:32:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:15.377 01:32:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:15.377 01:32:06 -- common/autotest_common.sh@10 -- # set +x 00:24:15.377 ************************************ 00:24:15.377 START TEST nvmf_fio_host 00:24:15.377 ************************************ 00:24:15.377 01:32:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:15.377 * Looking for test storage... 
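The nvmf_perf phase traced above derives usable space from the lvstore's free_clusters and cluster_size (238234 clusters of 4 MiB give the 952936 MiB reported) and then sweeps queue depth and IO size with spdk_nvme_perf. A minimal sketch of that flow, reusing the rpc.py path, lvstore UUID, and target address from this run (they would differ in another environment), is:

```bash
# Sketch only: values below are the ones reported in the run above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
uuid=ccd21cda-8721-4173-b1d7-c34545023ffd

fc=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")
cs=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")
free_mb=$((fc * cs / 1024 / 1024))   # 238234 clusters * 4 MiB = 952936 MiB above

# Same queue-depth / IO-size sweep as host/perf.sh runs above.
for qd in 1 32 128; do
  for o in 512 131072; do
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  done
done
```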
00:24:15.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:15.377 01:32:06 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.377 01:32:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.377 01:32:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.377 01:32:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.377 01:32:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.377 01:32:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.377 01:32:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.377 01:32:06 -- paths/export.sh@5 -- # export PATH 00:24:15.377 01:32:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.377 01:32:06 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.377 01:32:06 -- nvmf/common.sh@7 -- # uname -s 00:24:15.377 01:32:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.377 01:32:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.377 01:32:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.377 01:32:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.377 01:32:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.377 01:32:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.377 01:32:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.377 01:32:06 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.378 01:32:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.378 01:32:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.378 01:32:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:15.378 01:32:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:15.378 01:32:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.378 01:32:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.378 01:32:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.378 01:32:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.378 01:32:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.378 01:32:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.378 01:32:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.378 01:32:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.378 01:32:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.378 01:32:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.378 01:32:06 -- paths/export.sh@5 -- # export PATH 00:24:15.378 01:32:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.378 01:32:06 -- nvmf/common.sh@46 -- # : 0 00:24:15.378 01:32:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:15.378 01:32:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:15.378 01:32:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:15.378 01:32:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.378 01:32:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.378 01:32:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:15.378 01:32:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:15.378 01:32:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:15.378 01:32:06 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:15.378 01:32:06 -- host/fio.sh@14 -- # nvmftestinit 00:24:15.378 01:32:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:15.378 01:32:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.378 01:32:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:15.378 01:32:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:15.378 01:32:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:15.378 01:32:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.378 01:32:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:15.378 01:32:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.378 01:32:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:15.378 01:32:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:15.378 01:32:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:15.378 01:32:06 -- common/autotest_common.sh@10 -- # set +x 00:24:17.282 01:32:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:17.282 01:32:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:17.282 01:32:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:17.282 01:32:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:17.282 01:32:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:17.282 01:32:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:17.282 01:32:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:17.282 01:32:08 -- nvmf/common.sh@294 -- # net_devs=() 00:24:17.282 01:32:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:17.282 01:32:08 -- nvmf/common.sh@295 -- # e810=() 00:24:17.282 01:32:08 -- nvmf/common.sh@295 -- # local -ga e810 00:24:17.282 01:32:08 -- nvmf/common.sh@296 -- # x722=() 00:24:17.282 01:32:08 -- nvmf/common.sh@296 -- # local -ga x722 00:24:17.282 01:32:08 -- nvmf/common.sh@297 -- # mlx=() 00:24:17.282 01:32:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:17.282 01:32:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.282 01:32:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.282 01:32:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.282 01:32:08 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.282 01:32:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.282 01:32:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.282 01:32:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.282 01:32:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.282 01:32:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.282 01:32:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.282 01:32:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.282 01:32:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:17.282 01:32:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:17.282 01:32:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:17.282 01:32:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:17.282 01:32:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:17.282 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:17.282 01:32:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:17.282 01:32:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:17.282 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:17.282 01:32:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:17.282 01:32:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:17.282 01:32:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:17.282 01:32:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.282 01:32:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:17.282 01:32:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.283 01:32:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:17.283 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:17.283 01:32:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.283 01:32:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:17.283 01:32:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.283 01:32:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:17.283 01:32:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.283 01:32:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:17.283 Found net devices under 0000:0a:00.1: cvl_0_1 
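The device scan above matches the Intel E810 functions (device ID 0x159b) and resolves each PCI address to its kernel netdev through sysfs before picking cvl_0_0 as the target interface and cvl_0_1 as the initiator. A minimal sketch of that sysfs lookup, assuming the same 0000:0a:00.0/0000:0a:00.1 addresses seen in this run, is:

```bash
# Sketch of the sysfs lookup performed by nvmf/common.sh above; the PCI
# addresses are the ones detected in this run and are an assumption elsewhere.
for pci in 0000:0a:00.0 0000:0a:00.1; do
  for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    echo "Found net devices under $pci: $(basename "$dev")"
  done
done
```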
00:24:17.283 01:32:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.283 01:32:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:17.283 01:32:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:17.283 01:32:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:17.283 01:32:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:17.283 01:32:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:17.283 01:32:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.283 01:32:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.283 01:32:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.283 01:32:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:17.283 01:32:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.283 01:32:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.283 01:32:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:17.283 01:32:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.283 01:32:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.283 01:32:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:17.283 01:32:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:17.283 01:32:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.283 01:32:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.283 01:32:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.283 01:32:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.283 01:32:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:17.283 01:32:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.283 01:32:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.283 01:32:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.283 01:32:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:17.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:24:17.283 00:24:17.283 --- 10.0.0.2 ping statistics --- 00:24:17.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.283 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:24:17.283 01:32:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:17.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:24:17.283 00:24:17.283 --- 10.0.0.1 ping statistics --- 00:24:17.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.283 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:24:17.283 01:32:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.283 01:32:09 -- nvmf/common.sh@410 -- # return 0 00:24:17.283 01:32:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:17.283 01:32:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.283 01:32:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:17.283 01:32:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:17.283 01:32:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.283 01:32:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:17.283 01:32:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:17.542 01:32:09 -- host/fio.sh@16 -- # [[ y != y ]] 00:24:17.542 01:32:09 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:17.542 01:32:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:17.542 01:32:09 -- common/autotest_common.sh@10 -- # set +x 00:24:17.542 01:32:09 -- host/fio.sh@24 -- # nvmfpid=723568 00:24:17.542 01:32:09 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:17.542 01:32:09 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:17.542 01:32:09 -- host/fio.sh@28 -- # waitforlisten 723568 00:24:17.542 01:32:09 -- common/autotest_common.sh@819 -- # '[' -z 723568 ']' 00:24:17.542 01:32:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.542 01:32:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:17.542 01:32:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.542 01:32:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:17.542 01:32:09 -- common/autotest_common.sh@10 -- # set +x 00:24:17.542 [2024-07-27 01:32:09.097150] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:17.542 [2024-07-27 01:32:09.097238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.542 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.542 [2024-07-27 01:32:09.164685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:17.542 [2024-07-27 01:32:09.282243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:17.542 [2024-07-27 01:32:09.282411] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.542 [2024-07-27 01:32:09.282427] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.542 [2024-07-27 01:32:09.282440] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:17.542 [2024-07-27 01:32:09.282500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.542 [2024-07-27 01:32:09.282535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.542 [2024-07-27 01:32:09.282583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.542 [2024-07-27 01:32:09.282586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.478 01:32:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:18.478 01:32:10 -- common/autotest_common.sh@852 -- # return 0 00:24:18.478 01:32:10 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:18.736 [2024-07-27 01:32:10.260171] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.736 01:32:10 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:18.736 01:32:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:18.736 01:32:10 -- common/autotest_common.sh@10 -- # set +x 00:24:18.736 01:32:10 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:18.994 Malloc1 00:24:18.995 01:32:10 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:19.252 01:32:10 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:19.509 01:32:11 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.767 [2024-07-27 01:32:11.310974] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.767 01:32:11 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:20.025 01:32:11 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:20.025 01:32:11 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:20.025 01:32:11 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:20.025 01:32:11 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:20.025 01:32:11 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:20.025 01:32:11 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:20.025 01:32:11 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:20.025 01:32:11 -- common/autotest_common.sh@1320 -- # shift 00:24:20.025 01:32:11 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:20.025 01:32:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:20.025 01:32:11 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:20.025 01:32:11 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:24:20.025 01:32:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:20.025 01:32:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:20.025 01:32:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:20.025 01:32:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:20.025 01:32:11 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:20.025 01:32:11 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:20.025 01:32:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:20.025 01:32:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:20.025 01:32:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:20.025 01:32:11 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:20.025 01:32:11 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:20.285 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:20.285 fio-3.35 00:24:20.285 Starting 1 thread 00:24:20.285 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.821 00:24:22.821 test: (groupid=0, jobs=1): err= 0: pid=724070: Sat Jul 27 01:32:14 2024 00:24:22.821 read: IOPS=9365, BW=36.6MiB/s (38.4MB/s)(73.4MiB/2006msec) 00:24:22.821 slat (nsec): min=1969, max=157228, avg=2455.15, stdev=1797.08 00:24:22.821 clat (usec): min=3864, max=12828, avg=7573.71, stdev=582.13 00:24:22.821 lat (usec): min=3889, max=12831, avg=7576.16, stdev=582.07 00:24:22.821 clat percentiles (usec): 00:24:22.821 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 00:24:22.821 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7701], 00:24:22.821 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8455], 00:24:22.821 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11207], 99.95th=[11731], 00:24:22.821 | 99.99th=[12780] 00:24:22.821 bw ( KiB/s): min=35936, max=38088, per=99.92%, avg=37430.00, stdev=1010.06, samples=4 00:24:22.821 iops : min= 8984, max= 9522, avg=9357.50, stdev=252.51, samples=4 00:24:22.821 write: IOPS=9368, BW=36.6MiB/s (38.4MB/s)(73.4MiB/2006msec); 0 zone resets 00:24:22.821 slat (usec): min=2, max=132, avg= 2.53, stdev= 1.33 00:24:22.821 clat (usec): min=1519, max=11622, avg=6059.41, stdev=518.24 00:24:22.821 lat (usec): min=1528, max=11624, avg=6061.94, stdev=518.24 00:24:22.821 clat percentiles (usec): 00:24:22.821 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:24:22.821 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6194], 00:24:22.821 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6849], 00:24:22.821 | 99.00th=[ 7177], 99.50th=[ 7373], 99.90th=[10290], 99.95th=[10945], 00:24:22.821 | 99.99th=[11600] 00:24:22.821 bw ( KiB/s): min=36760, max=38272, per=100.00%, avg=37478.00, stdev=638.38, samples=4 00:24:22.821 iops : min= 9190, max= 9568, avg=9369.50, stdev=159.59, samples=4 00:24:22.821 lat (msec) : 2=0.01%, 4=0.08%, 10=99.77%, 20=0.14% 00:24:22.821 cpu : usr=52.82%, sys=39.10%, ctx=71, majf=0, minf=5 00:24:22.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:22.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:22.821 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:22.821 issued rwts: total=18787,18794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:22.821 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:22.821 00:24:22.821 Run status group 0 (all jobs): 00:24:22.821 READ: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=73.4MiB (77.0MB), run=2006-2006msec 00:24:22.822 WRITE: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=73.4MiB (77.0MB), run=2006-2006msec 00:24:22.822 01:32:14 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:22.822 01:32:14 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:22.822 01:32:14 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:22.822 01:32:14 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:22.822 01:32:14 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:22.822 01:32:14 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:22.822 01:32:14 -- common/autotest_common.sh@1320 -- # shift 00:24:22.822 01:32:14 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:22.822 01:32:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:22.822 01:32:14 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:22.822 01:32:14 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:22.822 01:32:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:22.822 01:32:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:22.822 01:32:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:22.822 01:32:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:22.822 01:32:14 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:22.822 01:32:14 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:22.822 01:32:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:22.822 01:32:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:22.822 01:32:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:22.822 01:32:14 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:22.822 01:32:14 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:22.822 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:22.822 fio-3.35 00:24:22.822 Starting 1 thread 00:24:22.822 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.385 00:24:25.385 test: (groupid=0, jobs=1): err= 0: pid=724414: Sat Jul 27 01:32:16 2024 00:24:25.385 read: IOPS=8275, BW=129MiB/s (136MB/s)(260MiB/2011msec) 00:24:25.385 slat (nsec): min=2841, max=95937, avg=3659.29, stdev=1644.78 00:24:25.385 clat (usec): min=3258, max=18042, 
avg=9343.16, stdev=2312.98 00:24:25.385 lat (usec): min=3262, max=18046, avg=9346.82, stdev=2313.10 00:24:25.385 clat percentiles (usec): 00:24:25.385 | 1.00th=[ 4948], 5.00th=[ 5866], 10.00th=[ 6521], 20.00th=[ 7373], 00:24:25.385 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9765], 00:24:25.385 | 70.00th=[10421], 80.00th=[11207], 90.00th=[12518], 95.00th=[13566], 00:24:25.385 | 99.00th=[15401], 99.50th=[15926], 99.90th=[16581], 99.95th=[16909], 00:24:25.385 | 99.99th=[17433] 00:24:25.385 bw ( KiB/s): min=61504, max=77216, per=52.23%, avg=69160.00, stdev=6525.35, samples=4 00:24:25.385 iops : min= 3844, max= 4826, avg=4322.50, stdev=407.83, samples=4 00:24:25.385 write: IOPS=4729, BW=73.9MiB/s (77.5MB/s)(141MiB/1905msec); 0 zone resets 00:24:25.385 slat (usec): min=30, max=134, avg=33.43, stdev= 4.85 00:24:25.385 clat (usec): min=4322, max=18239, avg=10808.32, stdev=1738.34 00:24:25.385 lat (usec): min=4353, max=18271, avg=10841.75, stdev=1738.63 00:24:25.385 clat percentiles (usec): 00:24:25.385 | 1.00th=[ 7504], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[ 9372], 00:24:25.385 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10683], 60.00th=[11076], 00:24:25.385 | 70.00th=[11469], 80.00th=[12125], 90.00th=[13042], 95.00th=[13960], 00:24:25.385 | 99.00th=[15926], 99.50th=[16188], 99.90th=[17433], 99.95th=[17433], 00:24:25.385 | 99.99th=[18220] 00:24:25.385 bw ( KiB/s): min=64064, max=80352, per=94.98%, avg=71864.00, stdev=6820.20, samples=4 00:24:25.385 iops : min= 4004, max= 5022, avg=4491.50, stdev=426.26, samples=4 00:24:25.385 lat (msec) : 4=0.07%, 10=53.63%, 20=46.29% 00:24:25.385 cpu : usr=76.82%, sys=20.25%, ctx=19, majf=0, minf=1 00:24:25.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:25.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:25.385 issued rwts: total=16643,9009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:25.385 00:24:25.385 Run status group 0 (all jobs): 00:24:25.385 READ: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=260MiB (273MB), run=2011-2011msec 00:24:25.385 WRITE: bw=73.9MiB/s (77.5MB/s), 73.9MiB/s-73.9MiB/s (77.5MB/s-77.5MB/s), io=141MiB (148MB), run=1905-1905msec 00:24:25.385 01:32:16 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:25.385 01:32:16 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:24:25.385 01:32:16 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:24:25.385 01:32:16 -- host/fio.sh@51 -- # get_nvme_bdfs 00:24:25.386 01:32:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:24:25.386 01:32:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:24:25.386 01:32:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:25.386 01:32:16 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:25.386 01:32:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:24:25.386 01:32:16 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:24:25.386 01:32:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:24:25.386 01:32:16 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 
0000:88:00.0 -i 10.0.0.2 00:24:28.678 Nvme0n1 00:24:28.678 01:32:20 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:24:31.210 01:32:22 -- host/fio.sh@53 -- # ls_guid=045349ca-9d00-4692-9e09-5056a52a7368 00:24:31.210 01:32:22 -- host/fio.sh@54 -- # get_lvs_free_mb 045349ca-9d00-4692-9e09-5056a52a7368 00:24:31.210 01:32:22 -- common/autotest_common.sh@1343 -- # local lvs_uuid=045349ca-9d00-4692-9e09-5056a52a7368 00:24:31.210 01:32:22 -- common/autotest_common.sh@1344 -- # local lvs_info 00:24:31.210 01:32:22 -- common/autotest_common.sh@1345 -- # local fc 00:24:31.210 01:32:22 -- common/autotest_common.sh@1346 -- # local cs 00:24:31.210 01:32:22 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:31.468 01:32:23 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:24:31.468 { 00:24:31.468 "uuid": "045349ca-9d00-4692-9e09-5056a52a7368", 00:24:31.468 "name": "lvs_0", 00:24:31.468 "base_bdev": "Nvme0n1", 00:24:31.468 "total_data_clusters": 930, 00:24:31.468 "free_clusters": 930, 00:24:31.468 "block_size": 512, 00:24:31.468 "cluster_size": 1073741824 00:24:31.468 } 00:24:31.468 ]' 00:24:31.468 01:32:23 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="045349ca-9d00-4692-9e09-5056a52a7368") .free_clusters' 00:24:31.468 01:32:23 -- common/autotest_common.sh@1348 -- # fc=930 00:24:31.468 01:32:23 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="045349ca-9d00-4692-9e09-5056a52a7368") .cluster_size' 00:24:31.468 01:32:23 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:24:31.468 01:32:23 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:24:31.468 01:32:23 -- common/autotest_common.sh@1353 -- # echo 952320 00:24:31.468 952320 00:24:31.468 01:32:23 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:24:32.033 348e5fc1-faee-42c1-90ba-7e6f2314be60 00:24:32.033 01:32:23 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:24:32.291 01:32:23 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:24:32.548 01:32:24 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:32.806 01:32:24 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:32.806 01:32:24 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:32.806 01:32:24 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:32.806 01:32:24 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:32.806 01:32:24 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:32.806 01:32:24 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.806 
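Before this fio pass, the script attaches the local NVMe at 0000:88:00.0, builds lvs_0 on it with 1 GiB clusters, carves a full-size lvol, and exports it through subsystem cnode2 over TCP. A minimal sketch of that RPC sequence, using the bdf, names, and the 952320 MiB size reported in this run, is:

```bash
# Sketch of the attach/export sequence traced above; flags are as logged.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2
$rpc bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0
$rpc bdev_lvol_create -l lvs_0 lbd_0 952320
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
```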
01:32:24 -- common/autotest_common.sh@1320 -- # shift 00:24:32.806 01:32:24 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:32.806 01:32:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.806 01:32:24 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.806 01:32:24 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:32.806 01:32:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:32.806 01:32:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:32.806 01:32:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:32.806 01:32:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.806 01:32:24 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.806 01:32:24 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:32.806 01:32:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:32.806 01:32:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:32.806 01:32:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:32.806 01:32:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:32.806 01:32:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:33.068 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:33.068 fio-3.35 00:24:33.068 Starting 1 thread 00:24:33.068 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.590 00:24:35.590 test: (groupid=0, jobs=1): err= 0: pid=725748: Sat Jul 27 01:32:26 2024 00:24:35.590 read: IOPS=6187, BW=24.2MiB/s (25.3MB/s)(48.5MiB/2008msec) 00:24:35.590 slat (nsec): min=1993, max=117561, avg=2522.34, stdev=1945.80 00:24:35.590 clat (usec): min=945, max=171446, avg=11422.01, stdev=11522.44 00:24:35.590 lat (usec): min=948, max=171483, avg=11424.54, stdev=11522.65 00:24:35.590 clat percentiles (msec): 00:24:35.590 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 10], 00:24:35.590 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:24:35.590 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 12], 00:24:35.590 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:24:35.590 | 99.99th=[ 171] 00:24:35.590 bw ( KiB/s): min=17168, max=27536, per=99.87%, avg=24718.00, stdev=5037.84, samples=4 00:24:35.590 iops : min= 4292, max= 6884, avg=6179.50, stdev=1259.46, samples=4 00:24:35.590 write: IOPS=6172, BW=24.1MiB/s (25.3MB/s)(48.4MiB/2008msec); 0 zone resets 00:24:35.590 slat (nsec): min=2123, max=99968, avg=2626.55, stdev=1504.29 00:24:35.590 clat (usec): min=294, max=168868, avg=9139.22, stdev=10782.14 00:24:35.590 lat (usec): min=297, max=168873, avg=9141.85, stdev=10782.34 00:24:35.590 clat percentiles (msec): 00:24:35.590 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:24:35.590 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:24:35.590 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 10], 95.00th=[ 10], 00:24:35.590 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:24:35.590 | 99.99th=[ 169] 00:24:35.590 bw ( KiB/s): min=18216, max=26944, per=99.91%, avg=24668.00, stdev=4302.53, samples=4 00:24:35.590 iops : min= 
4554, max= 6736, avg=6167.00, stdev=1075.63, samples=4 00:24:35.590 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:24:35.590 lat (msec) : 2=0.03%, 4=0.11%, 10=60.69%, 20=38.63%, 250=0.52% 00:24:35.590 cpu : usr=52.67%, sys=42.55%, ctx=53, majf=0, minf=5 00:24:35.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:35.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:35.590 issued rwts: total=12425,12395,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.590 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:35.590 00:24:35.590 Run status group 0 (all jobs): 00:24:35.591 READ: bw=24.2MiB/s (25.3MB/s), 24.2MiB/s-24.2MiB/s (25.3MB/s-25.3MB/s), io=48.5MiB (50.9MB), run=2008-2008msec 00:24:35.591 WRITE: bw=24.1MiB/s (25.3MB/s), 24.1MiB/s-24.1MiB/s (25.3MB/s-25.3MB/s), io=48.4MiB (50.8MB), run=2008-2008msec 00:24:35.591 01:32:26 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:35.591 01:32:27 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:24:36.959 01:32:28 -- host/fio.sh@64 -- # ls_nested_guid=2ca89dcf-05d5-4977-aa3e-7041cd2e91f5 00:24:36.959 01:32:28 -- host/fio.sh@65 -- # get_lvs_free_mb 2ca89dcf-05d5-4977-aa3e-7041cd2e91f5 00:24:36.959 01:32:28 -- common/autotest_common.sh@1343 -- # local lvs_uuid=2ca89dcf-05d5-4977-aa3e-7041cd2e91f5 00:24:36.959 01:32:28 -- common/autotest_common.sh@1344 -- # local lvs_info 00:24:36.959 01:32:28 -- common/autotest_common.sh@1345 -- # local fc 00:24:36.959 01:32:28 -- common/autotest_common.sh@1346 -- # local cs 00:24:36.959 01:32:28 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:36.959 01:32:28 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:24:36.959 { 00:24:36.959 "uuid": "045349ca-9d00-4692-9e09-5056a52a7368", 00:24:36.959 "name": "lvs_0", 00:24:36.959 "base_bdev": "Nvme0n1", 00:24:36.959 "total_data_clusters": 930, 00:24:36.959 "free_clusters": 0, 00:24:36.959 "block_size": 512, 00:24:36.959 "cluster_size": 1073741824 00:24:36.959 }, 00:24:36.959 { 00:24:36.959 "uuid": "2ca89dcf-05d5-4977-aa3e-7041cd2e91f5", 00:24:36.959 "name": "lvs_n_0", 00:24:36.959 "base_bdev": "348e5fc1-faee-42c1-90ba-7e6f2314be60", 00:24:36.959 "total_data_clusters": 237847, 00:24:36.959 "free_clusters": 237847, 00:24:36.959 "block_size": 512, 00:24:36.959 "cluster_size": 4194304 00:24:36.959 } 00:24:36.959 ]' 00:24:36.959 01:32:28 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="2ca89dcf-05d5-4977-aa3e-7041cd2e91f5") .free_clusters' 00:24:36.959 01:32:28 -- common/autotest_common.sh@1348 -- # fc=237847 00:24:36.959 01:32:28 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="2ca89dcf-05d5-4977-aa3e-7041cd2e91f5") .cluster_size' 00:24:36.959 01:32:28 -- common/autotest_common.sh@1349 -- # cs=4194304 00:24:36.959 01:32:28 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:24:36.959 01:32:28 -- common/autotest_common.sh@1353 -- # echo 951388 00:24:36.959 951388 00:24:36.959 01:32:28 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:24:37.890 58ca3b4e-099f-45b8-acc1-d331b54cb4be 00:24:37.890 01:32:29 -- host/fio.sh@67 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:24:37.890 01:32:29 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:24:38.147 01:32:29 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:38.405 01:32:30 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:38.405 01:32:30 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:38.405 01:32:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:38.405 01:32:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:38.405 01:32:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:38.405 01:32:30 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:38.405 01:32:30 -- common/autotest_common.sh@1320 -- # shift 00:24:38.405 01:32:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:38.405 01:32:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:38.405 01:32:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:38.405 01:32:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:38.405 01:32:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:38.405 01:32:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:38.405 01:32:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:38.405 01:32:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:38.405 01:32:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:38.405 01:32:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:38.405 01:32:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:38.405 01:32:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:38.405 01:32:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:38.405 01:32:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:38.406 01:32:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:38.663 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:38.663 fio-3.35 00:24:38.663 Starting 1 thread 00:24:38.663 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.193 00:24:41.193 test: (groupid=0, jobs=1): err= 0: pid=726505: Sat Jul 27 01:32:32 2024 00:24:41.193 read: IOPS=6140, BW=24.0MiB/s (25.2MB/s)(48.2MiB/2008msec) 00:24:41.193 slat (usec): min=2, max=176, avg= 2.73, stdev= 2.46 00:24:41.193 clat (usec): min=4497, max=18593, 
avg=11522.85, stdev=939.38 00:24:41.193 lat (usec): min=4514, max=18596, avg=11525.58, stdev=939.28 00:24:41.193 clat percentiles (usec): 00:24:41.193 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:24:41.193 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:24:41.193 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[12911], 00:24:41.193 | 99.00th=[13566], 99.50th=[13829], 99.90th=[17433], 99.95th=[18482], 00:24:41.193 | 99.99th=[18482] 00:24:41.193 bw ( KiB/s): min=23248, max=25032, per=99.89%, avg=24536.00, stdev=860.46, samples=4 00:24:41.193 iops : min= 5812, max= 6258, avg=6134.00, stdev=215.12, samples=4 00:24:41.193 write: IOPS=6128, BW=23.9MiB/s (25.1MB/s)(48.1MiB/2008msec); 0 zone resets 00:24:41.193 slat (usec): min=2, max=135, avg= 2.82, stdev= 1.70 00:24:41.193 clat (usec): min=2525, max=17117, avg=9167.31, stdev=843.48 00:24:41.193 lat (usec): min=2534, max=17120, avg=9170.13, stdev=843.45 00:24:41.193 clat percentiles (usec): 00:24:41.193 | 1.00th=[ 7242], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8586], 00:24:41.193 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:24:41.193 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:24:41.193 | 99.00th=[11076], 99.50th=[11338], 99.90th=[14353], 99.95th=[15926], 00:24:41.193 | 99.99th=[17171] 00:24:41.193 bw ( KiB/s): min=24216, max=24640, per=99.88%, avg=24486.00, stdev=187.43, samples=4 00:24:41.193 iops : min= 6054, max= 6160, avg=6121.50, stdev=46.86, samples=4 00:24:41.193 lat (msec) : 4=0.04%, 10=45.13%, 20=54.83% 00:24:41.193 cpu : usr=56.05%, sys=38.76%, ctx=84, majf=0, minf=5 00:24:41.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:41.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:41.193 issued rwts: total=12331,12307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:41.193 00:24:41.193 Run status group 0 (all jobs): 00:24:41.193 READ: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=48.2MiB (50.5MB), run=2008-2008msec 00:24:41.193 WRITE: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=48.1MiB (50.4MB), run=2008-2008msec 00:24:41.193 01:32:32 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:41.193 01:32:32 -- host/fio.sh@74 -- # sync 00:24:41.193 01:32:32 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:24:45.422 01:32:36 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:45.422 01:32:36 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:24:48.702 01:32:39 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:48.702 01:32:40 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:24:50.607 01:32:42 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:50.607 01:32:42 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:50.607 01:32:42 -- host/fio.sh@86 -- # nvmftestfini 00:24:50.607 01:32:42 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:24:50.607 01:32:42 -- nvmf/common.sh@116 -- # sync 00:24:50.607 01:32:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:50.607 01:32:42 -- nvmf/common.sh@119 -- # set +e 00:24:50.607 01:32:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:50.607 01:32:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:50.607 rmmod nvme_tcp 00:24:50.607 rmmod nvme_fabrics 00:24:50.607 rmmod nvme_keyring 00:24:50.607 01:32:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:50.607 01:32:42 -- nvmf/common.sh@123 -- # set -e 00:24:50.607 01:32:42 -- nvmf/common.sh@124 -- # return 0 00:24:50.607 01:32:42 -- nvmf/common.sh@477 -- # '[' -n 723568 ']' 00:24:50.607 01:32:42 -- nvmf/common.sh@478 -- # killprocess 723568 00:24:50.607 01:32:42 -- common/autotest_common.sh@926 -- # '[' -z 723568 ']' 00:24:50.607 01:32:42 -- common/autotest_common.sh@930 -- # kill -0 723568 00:24:50.607 01:32:42 -- common/autotest_common.sh@931 -- # uname 00:24:50.607 01:32:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:50.607 01:32:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 723568 00:24:50.607 01:32:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:50.607 01:32:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:50.607 01:32:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 723568' 00:24:50.607 killing process with pid 723568 00:24:50.607 01:32:42 -- common/autotest_common.sh@945 -- # kill 723568 00:24:50.607 01:32:42 -- common/autotest_common.sh@950 -- # wait 723568 00:24:50.866 01:32:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:50.866 01:32:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:50.866 01:32:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:50.866 01:32:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.866 01:32:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:50.866 01:32:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.866 01:32:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.866 01:32:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.771 01:32:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:52.771 00:24:52.771 real 0m37.511s 00:24:52.771 user 2m23.035s 00:24:52.771 sys 0m7.335s 00:24:52.771 01:32:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:52.771 01:32:44 -- common/autotest_common.sh@10 -- # set +x 00:24:52.771 ************************************ 00:24:52.771 END TEST nvmf_fio_host 00:24:52.771 ************************************ 00:24:52.771 01:32:44 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:52.771 01:32:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:52.771 01:32:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:52.771 01:32:44 -- common/autotest_common.sh@10 -- # set +x 00:24:52.771 ************************************ 00:24:52.771 START TEST nvmf_failover 00:24:52.771 ************************************ 00:24:52.771 01:32:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:52.771 * Looking for test storage... 
00:24:52.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.771 01:32:44 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.771 01:32:44 -- nvmf/common.sh@7 -- # uname -s 00:24:52.771 01:32:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.771 01:32:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.771 01:32:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.771 01:32:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.771 01:32:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.771 01:32:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.771 01:32:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.771 01:32:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.771 01:32:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.771 01:32:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.771 01:32:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.771 01:32:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.771 01:32:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.771 01:32:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.771 01:32:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.771 01:32:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.771 01:32:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.771 01:32:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.771 01:32:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.771 01:32:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.771 01:32:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.771 01:32:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.771 01:32:44 -- paths/export.sh@5 -- # export PATH 00:24:52.771 01:32:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.771 01:32:44 -- nvmf/common.sh@46 -- # : 0 00:24:52.771 01:32:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:52.771 01:32:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:52.771 01:32:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:52.771 01:32:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.771 01:32:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.771 01:32:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:52.771 01:32:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:52.772 01:32:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:52.772 01:32:44 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:52.772 01:32:44 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:52.772 01:32:44 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:52.772 01:32:44 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:52.772 01:32:44 -- host/failover.sh@18 -- # nvmftestinit 00:24:52.772 01:32:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:52.772 01:32:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.772 01:32:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:52.772 01:32:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:52.772 01:32:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:52.772 01:32:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.772 01:32:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.772 01:32:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.772 01:32:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:52.772 01:32:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:52.772 01:32:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:52.772 01:32:44 -- common/autotest_common.sh@10 -- # set +x 00:24:55.302 01:32:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:55.302 01:32:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:55.302 01:32:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:55.302 01:32:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:55.302 01:32:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:55.302 01:32:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:55.302 01:32:46 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:24:55.302 01:32:46 -- nvmf/common.sh@294 -- # net_devs=() 00:24:55.302 01:32:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:55.302 01:32:46 -- nvmf/common.sh@295 -- # e810=() 00:24:55.302 01:32:46 -- nvmf/common.sh@295 -- # local -ga e810 00:24:55.302 01:32:46 -- nvmf/common.sh@296 -- # x722=() 00:24:55.302 01:32:46 -- nvmf/common.sh@296 -- # local -ga x722 00:24:55.302 01:32:46 -- nvmf/common.sh@297 -- # mlx=() 00:24:55.302 01:32:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:55.302 01:32:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.302 01:32:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.302 01:32:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.302 01:32:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.302 01:32:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.302 01:32:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.302 01:32:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.302 01:32:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.302 01:32:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.302 01:32:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.302 01:32:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.302 01:32:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:55.302 01:32:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:55.302 01:32:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:55.302 01:32:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:55.302 01:32:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:55.302 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:55.302 01:32:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:55.302 01:32:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:55.302 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:55.302 01:32:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:55.302 01:32:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:55.302 01:32:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.302 01:32:46 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:24:55.302 01:32:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.302 01:32:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:55.302 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:55.302 01:32:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.302 01:32:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:55.302 01:32:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.302 01:32:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:55.302 01:32:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.302 01:32:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:55.302 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:55.302 01:32:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.302 01:32:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:55.302 01:32:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:55.302 01:32:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:55.302 01:32:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:55.302 01:32:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.302 01:32:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.302 01:32:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.302 01:32:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:55.303 01:32:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.303 01:32:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.303 01:32:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:55.303 01:32:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.303 01:32:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.303 01:32:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:55.303 01:32:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:55.303 01:32:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.303 01:32:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.303 01:32:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.303 01:32:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.303 01:32:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:55.303 01:32:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.303 01:32:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.303 01:32:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.303 01:32:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:55.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:24:55.303 00:24:55.303 --- 10.0.0.2 ping statistics --- 00:24:55.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.303 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:24:55.303 01:32:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:55.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:24:55.303 00:24:55.303 --- 10.0.0.1 ping statistics --- 00:24:55.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.303 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:24:55.303 01:32:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.303 01:32:46 -- nvmf/common.sh@410 -- # return 0 00:24:55.303 01:32:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:55.303 01:32:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.303 01:32:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:55.303 01:32:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:55.303 01:32:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.303 01:32:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:55.303 01:32:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:55.303 01:32:46 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:55.303 01:32:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:55.303 01:32:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:55.303 01:32:46 -- common/autotest_common.sh@10 -- # set +x 00:24:55.303 01:32:46 -- nvmf/common.sh@469 -- # nvmfpid=729922 00:24:55.303 01:32:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:55.303 01:32:46 -- nvmf/common.sh@470 -- # waitforlisten 729922 00:24:55.303 01:32:46 -- common/autotest_common.sh@819 -- # '[' -z 729922 ']' 00:24:55.303 01:32:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.303 01:32:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:55.303 01:32:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.303 01:32:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:55.303 01:32:46 -- common/autotest_common.sh@10 -- # set +x 00:24:55.303 [2024-07-27 01:32:46.651514] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:55.303 [2024-07-27 01:32:46.651610] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.303 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.303 [2024-07-27 01:32:46.720825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:55.303 [2024-07-27 01:32:46.834616] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:55.303 [2024-07-27 01:32:46.834809] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.303 [2024-07-27 01:32:46.834830] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.303 [2024-07-27 01:32:46.834845] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:55.303 [2024-07-27 01:32:46.834948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.303 [2024-07-27 01:32:46.835046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.303 [2024-07-27 01:32:46.835049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.869 01:32:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:55.869 01:32:47 -- common/autotest_common.sh@852 -- # return 0 00:24:55.869 01:32:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:55.869 01:32:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:55.869 01:32:47 -- common/autotest_common.sh@10 -- # set +x 00:24:55.869 01:32:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.869 01:32:47 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:56.127 [2024-07-27 01:32:47.827356] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.127 01:32:47 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:56.385 Malloc0 00:24:56.385 01:32:48 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:56.642 01:32:48 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:56.900 01:32:48 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.158 [2024-07-27 01:32:48.821645] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.158 01:32:48 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:57.415 [2024-07-27 01:32:49.058359] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:57.415 01:32:49 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:57.673 [2024-07-27 01:32:49.303224] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:57.673 01:32:49 -- host/failover.sh@31 -- # bdevperf_pid=730230 00:24:57.673 01:32:49 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:57.673 01:32:49 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:57.673 01:32:49 -- host/failover.sh@34 -- # waitforlisten 730230 /var/tmp/bdevperf.sock 00:24:57.673 01:32:49 -- common/autotest_common.sh@819 -- # '[' -z 730230 ']' 00:24:57.673 01:32:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:57.673 01:32:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:57.673 01:32:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:24:57.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:57.673 01:32:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:57.673 01:32:49 -- common/autotest_common.sh@10 -- # set +x 00:24:58.608 01:32:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:58.608 01:32:50 -- common/autotest_common.sh@852 -- # return 0 00:24:58.608 01:32:50 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.177 NVMe0n1 00:24:59.177 01:32:50 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.435 00:24:59.435 01:32:51 -- host/failover.sh@39 -- # run_test_pid=730499 00:24:59.435 01:32:51 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:59.435 01:32:51 -- host/failover.sh@41 -- # sleep 1 00:25:00.371 01:32:52 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.631 [2024-07-27 01:32:52.284867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.284984] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.284999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285012] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the 
state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285187] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285290] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.631 [2024-07-27 01:32:52.285303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285315] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285415] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285487] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285558] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285628] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 [2024-07-27 01:32:52.285663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690370 is same with the state(5) to be set 00:25:00.632 01:32:52 -- host/failover.sh@45 -- # sleep 3 00:25:03.914 01:32:55 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.914 00:25:03.915 01:32:55 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:04.480 [2024-07-27 01:32:55.934388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690b80 is same with the state(5) to be set 
00:25:04.480 [2024-07-27 01:32:55.934442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690b80 is same with the state(5) to be set 00:25:04.480 [2024-07-27 01:32:55.934458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690b80 is same with the state(5) to be set 00:25:04.480 [2024-07-27 01:32:55.934471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690b80 is same with the state(5) to be set 00:25:04.480 [2024-07-27 01:32:55.934483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690b80 is same with the state(5) to be set 00:25:04.480 [2024-07-27 01:32:55.934496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690b80 is same with the state(5) to be set 00:25:04.480 [2024-07-27 01:32:55.934508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690b80 is same with the state(5) to be set 00:25:04.480 [2024-07-27 01:32:55.934521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690b80 is same with the state(5) to be set 00:25:04.480 [2024-07-27 01:32:55.934534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690b80 is same with the state(5) to be set 00:25:04.480 [2024-07-27 01:32:55.934546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690b80 is same with the state(5) to be set 00:25:04.480 [2024-07-27 01:32:55.934559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690b80 is same with the state(5) to be set 00:25:04.480 [2024-07-27 01:32:55.934572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690b80 is same with the state(5) to be set 00:25:04.480 [2024-07-27 01:32:55.934585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690b80 is same with the state(5) to be set 00:25:04.480 [2024-07-27 01:32:55.934597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2690b80 is same with the state(5) to be set 00:25:04.480 01:32:55 -- host/failover.sh@50 -- # sleep 3 00:25:07.795 01:32:58 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.795 [2024-07-27 01:32:59.225244] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.795 01:32:59 -- host/failover.sh@55 -- # sleep 1 00:25:08.731 01:33:00 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:08.731 [2024-07-27 01:33:00.480608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480747] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480916] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.480989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481013] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481155] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the 
state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481361] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.731 [2024-07-27 01:33:00.481413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b2f0 is same with the state(5) to be set 00:25:08.990 01:33:00 -- host/failover.sh@59 -- # wait 730499 00:25:15.560 0 00:25:15.560 01:33:06 -- host/failover.sh@61 -- # killprocess 730230 00:25:15.560 01:33:06 -- common/autotest_common.sh@926 -- # '[' -z 730230 ']' 00:25:15.560 01:33:06 -- common/autotest_common.sh@930 -- # kill -0 730230 00:25:15.560 01:33:06 -- common/autotest_common.sh@931 -- # uname 00:25:15.560 01:33:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:15.560 01:33:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 730230 00:25:15.560 01:33:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:15.560 01:33:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:15.560 01:33:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 730230' 00:25:15.560 killing process with pid 730230 00:25:15.560 01:33:06 -- common/autotest_common.sh@945 -- # kill 730230 00:25:15.560 01:33:06 -- common/autotest_common.sh@950 -- # wait 730230 00:25:15.560 01:33:06 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:15.560 [2024-07-27 01:32:49.356690] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:15.560 [2024-07-27 01:32:49.356783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid730230 ] 00:25:15.560 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.560 [2024-07-27 01:32:49.416771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.560 [2024-07-27 01:32:49.524323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.560 Running I/O for 15 seconds... 
00:25:15.560 [2024-07-27 01:32:52.286057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 
01:32:52.286461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:116808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.560 [2024-07-27 01:32:52.286866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:116880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.560 [2024-07-27 01:32:52.286880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.286895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.286909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.286923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.286937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.286951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.286965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.286979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.286993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287034] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 
nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.561 [2024-07-27 01:32:52.287478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.561 [2024-07-27 01:32:52.287535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:116504 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:117048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.561 [2024-07-27 01:32:52.287876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.561 [2024-07-27 01:32:52.287904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.561 [2024-07-27 01:32:52.287932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.561 [2024-07-27 01:32:52.287947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:15.561 [2024-07-27 01:32:52.287960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.287978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.287992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:116648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.562 [2024-07-27 01:32:52.288222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.562 [2024-07-27 
01:32:52.288282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.562 [2024-07-27 01:32:52.288355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.562 [2024-07-27 01:32:52.288419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.562 [2024-07-27 01:32:52.288539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.562 [2024-07-27 01:32:52.288686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.562 [2024-07-27 01:32:52.288714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.562 [2024-07-27 01:32:52.288742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288886] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.288978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.288993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.289006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.289022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.289036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.562 [2024-07-27 01:32:52.289074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.562 [2024-07-27 01:32:52.289089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.563 [2024-07-27 01:32:52.289125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.563 [2024-07-27 01:32:52.289155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.563 [2024-07-27 01:32:52.289190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.563 [2024-07-27 01:32:52.289338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.563 [2024-07-27 01:32:52.289413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.563 [2024-07-27 01:32:52.289476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.563 [2024-07-27 01:32:52.289506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.563 [2024-07-27 01:32:52.289790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.563 [2024-07-27 01:32:52.289820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:15.563 [2024-07-27 01:32:52.289865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.289985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.289999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.290014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.563 [2024-07-27 01:32:52.290028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.290067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0600 is same with the state(5) to be set 00:25:15.563 [2024-07-27 01:32:52.290088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.563 [2024-07-27 01:32:52.290100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.563 [2024-07-27 01:32:52.290112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117016 len:8 PRP1 0x0 PRP2 0x0 00:25:15.563 [2024-07-27 01:32:52.290131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.290201] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cf0600 was disconnected and freed. reset controller. 
00:25:15.563 [2024-07-27 01:32:52.290231] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:15.563 [2024-07-27 01:32:52.290268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.563 [2024-07-27 01:32:52.290287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.290303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.563 [2024-07-27 01:32:52.290316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.563 [2024-07-27 01:32:52.290331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.564 [2024-07-27 01:32:52.290345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:52.290359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.564 [2024-07-27 01:32:52.290372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:52.290386] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.564 [2024-07-27 01:32:52.292718] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.564 [2024-07-27 01:32:52.292758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd1bd0 (9): Bad file descriptor 00:25:15.564 [2024-07-27 01:32:52.365272] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:15.564 [2024-07-27 01:32:55.932768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.564 [2024-07-27 01:32:55.932842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.932876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.564 [2024-07-27 01:32:55.932892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.932906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.564 [2024-07-27 01:32:55.932920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.932934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.564 [2024-07-27 01:32:55.932948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.932962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd1bd0 is same with the state(5) to be set 00:25:15.564 [2024-07-27 01:32:55.934740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.934765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.934791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.934806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.934822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.934836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.934851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.934865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.934880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.934893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.934908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.934922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.934937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.934950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.934966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.934979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.934994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.564 [2024-07-27 01:32:55.935240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.564 [2024-07-27 01:32:55.935269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.564 [2024-07-27 01:32:55.935358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.564 [2024-07-27 01:32:55.935546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.564 [2024-07-27 01:32:55.935574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.564 [2024-07-27 01:32:55.935589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.935603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.935617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.935630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.935645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.935659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.935674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.935687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.935702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.935715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.935729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.935743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.935758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.935771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.935786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.935802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.935818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.935832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:15.565 [2024-07-27 01:32:55.935847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.565 [2024-07-27 01:32:55.935860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.935875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.565 [2024-07-27 01:32:55.935888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.935903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.935916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.935931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.565 [2024-07-27 01:32:55.935945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.935960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.935973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.935988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.936001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.936029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.936081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.936127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.936157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936174] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.936188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.936222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.936252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.936282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.936321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.565 [2024-07-27 01:32:55.936352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.936397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.565 [2024-07-27 01:32:55.936440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.565 [2024-07-27 01:32:55.936468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.936496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.565 [2024-07-27 01:32:55.936524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.565 [2024-07-27 01:32:55.936538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.565 [2024-07-27 01:32:55.936552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.936580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.936618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.566 [2024-07-27 01:32:55.936648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.936676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.936704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.566 [2024-07-27 01:32:55.936732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.566 [2024-07-27 01:32:55.936760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.566 [2024-07-27 01:32:55.936788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.936816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.936844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.936873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.566 [2024-07-27 01:32:55.936902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.936930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.566 [2024-07-27 01:32:55.936957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.936975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.936989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.937017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.937045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.937099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20288 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:15.566 [2024-07-27 01:32:55.937128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.566 [2024-07-27 01:32:55.937157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.566 [2024-07-27 01:32:55.937186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.937215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.566 [2024-07-27 01:32:55.937244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.937272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.937302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.937332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.937361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.937409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 
[2024-07-27 01:32:55.937438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.937466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.937494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.937522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.566 [2024-07-27 01:32:55.937550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.566 [2024-07-27 01:32:55.937578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.566 [2024-07-27 01:32:55.937606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.566 [2024-07-27 01:32:55.937628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.937642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.937657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.937671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.937685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.937698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.937713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.567 [2024-07-27 01:32:55.937727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.937741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.567 [2024-07-27 01:32:55.937758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.937774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.937787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.937803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.937822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.937836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.567 [2024-07-27 01:32:55.937850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.937865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.567 [2024-07-27 01:32:55.937878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.937892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.937906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.937921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.937934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.937948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.937962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.937976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.937990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.567 [2024-07-27 01:32:55.938213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.567 [2024-07-27 01:32:55.938272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.567 [2024-07-27 01:32:55.938302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.567 [2024-07-27 01:32:55.938338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.567 [2024-07-27 01:32:55.938443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.567 [2024-07-27 01:32:55.938472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 [2024-07-27 01:32:55.938670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.567 [2024-07-27 01:32:55.938684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.567 
[2024-07-27 01:32:55.938698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf24b0 is same with the state(5) to be set 00:25:15.567 [2024-07-27 01:32:55.938715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.567 [2024-07-27 01:32:55.938726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.567 [2024-07-27 01:32:55.938739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19912 len:8 PRP1 0x0 PRP2 0x0 00:25:15.567 [2024-07-27 01:32:55.938751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:32:55.938821] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cf24b0 was disconnected and freed. reset controller. 00:25:15.568 [2024-07-27 01:32:55.938841] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:15.568 [2024-07-27 01:32:55.938858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.568 [2024-07-27 01:32:55.940956] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.568 [2024-07-27 01:32:55.941001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd1bd0 (9): Bad file descriptor 00:25:15.568 [2024-07-27 01:32:56.055811] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:15.568 [2024-07-27 01:33:00.481296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.568 [2024-07-27 01:33:00.481349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.568 [2024-07-27 01:33:00.481383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.568 [2024-07-27 01:33:00.481412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.568 [2024-07-27 01:33:00.481446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd1bd0 is same with the state(5) to be set 00:25:15.568 [2024-07-27 01:33:00.481593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.481615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 
01:33:00.481641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.481657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.481689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.481718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.481747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.481776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.481805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.481849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.481877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.481905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.481934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.481962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.481982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.481996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.482024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.482077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.482123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.482154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.482184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.482214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.482245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.482274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:70 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.482304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.482334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.482380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.482427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.482456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.568 [2024-07-27 01:33:00.482487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.568 [2024-07-27 01:33:00.482517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.568 [2024-07-27 01:33:00.482533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.482547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.482563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.569 [2024-07-27 01:33:00.482587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.482602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.482615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.482629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10496 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.482642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.482656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.482669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.482684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.482698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.482713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.482726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.482740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.569 [2024-07-27 01:33:00.482753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.482767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.482781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.482809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.482822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.482837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.482850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.482875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.569 [2024-07-27 01:33:00.482888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.482903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.482916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.482931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:15.569 [2024-07-27 01:33:00.482944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.482959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.569 [2024-07-27 01:33:00.482972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.483012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.483027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.483042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.483082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.483100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.483115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.483131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.483146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.483162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.483176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.483192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.483206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.483221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.483243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.483259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.569 [2024-07-27 01:33:00.483274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.483289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.569 [2024-07-27 01:33:00.483303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.569 [2024-07-27 01:33:00.483319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.569 [2024-07-27 01:33:00.483333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.570 [2024-07-27 01:33:00.483733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.570 [2024-07-27 01:33:00.483762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.570 [2024-07-27 01:33:00.483907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.570 [2024-07-27 01:33:00.483935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.483963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.483979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.484003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.484018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.484047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.484073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.484104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.484122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.484136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.484152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.484167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.484182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.484197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.484212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.484226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.484242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.570 [2024-07-27 01:33:00.484256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.484272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.484286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 
[2024-07-27 01:33:00.484301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.484315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.484331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.570 [2024-07-27 01:33:00.484345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.484360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.570 [2024-07-27 01:33:00.484374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.484389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.570 [2024-07-27 01:33:00.484403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.570 [2024-07-27 01:33:00.484424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.570 [2024-07-27 01:33:00.484438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.571 [2024-07-27 01:33:00.484472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.571 [2024-07-27 01:33:00.484502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.484531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.484560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.571 [2024-07-27 01:33:00.484590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.484620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.484650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.484680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.484710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.484755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.484785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.484814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.484844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.571 [2024-07-27 01:33:00.484877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.484906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:56 nsid:1 lba:10808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.571 [2024-07-27 01:33:00.484935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.484979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.484995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.571 [2024-07-27 01:33:00.485008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.485037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.571 [2024-07-27 01:33:00.485092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.571 [2024-07-27 01:33:00.485123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.571 [2024-07-27 01:33:00.485158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.485188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.485218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.571 [2024-07-27 01:33:00.485247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10888 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.571 [2024-07-27 01:33:00.485275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.485308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.485337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.485396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.571 [2024-07-27 01:33:00.485426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.571 [2024-07-27 01:33:00.485471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.571 [2024-07-27 01:33:00.485501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.485547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.485584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.571 [2024-07-27 01:33:00.485614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.571 [2024-07-27 01:33:00.485629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:15.571 [2024-07-27 01:33:00.485644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.572 [2024-07-27 01:33:00.485659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.572 [2024-07-27 01:33:00.485674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.572 [2024-07-27 01:33:00.485689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.572 [2024-07-27 01:33:00.485707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.572 [2024-07-27 01:33:00.485724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.572 [2024-07-27 01:33:00.485742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.572 [2024-07-27 01:33:00.485758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.572 [2024-07-27 01:33:00.485773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.572 [2024-07-27 01:33:00.485787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cde050 is same with the state(5) to be set 00:25:15.572 [2024-07-27 01:33:00.485805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.572 [2024-07-27 01:33:00.485816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.572 [2024-07-27 01:33:00.485828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10456 len:8 PRP1 0x0 PRP2 0x0 00:25:15.572 [2024-07-27 01:33:00.485842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.572 [2024-07-27 01:33:00.485902] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cde050 was disconnected and freed. reset controller. 00:25:15.572 [2024-07-27 01:33:00.485921] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:15.572 [2024-07-27 01:33:00.485944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.572 [2024-07-27 01:33:00.488277] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.572 [2024-07-27 01:33:00.488319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd1bd0 (9): Bad file descriptor 00:25:15.572 [2024-07-27 01:33:00.558293] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
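Editor's note: the burst of ABORTED - SQ DELETION completions above is the expected side effect of the path switch: bdev_nvme tears down the queue pair on the failing path, aborts every command still queued on it, fails over from 10.0.0.2:4422 to 10.0.0.2:4420, and resets the controller. A condensed sketch of how this scenario is provoked, using only rpc.py calls that appear verbatim later in this trace (the SPDK checkout path, addresses, ports and the bdevperf RPC socket are taken from the log; anything else is an assumption):

# Sketch only -- same calls as host/failover.sh in this run, not the authoritative script.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # checkout path as used throughout this job

# Target side: expose the same subsystem on two extra TCP ports so alternate paths exist.
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# Host side (bdevperf RPC socket): register all three paths under the same controller name.
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Dropping the active path while I/O is in flight forces the failover: requests still queued on
# the old queue pair complete with ABORTED - SQ DELETION, and bdev_nvme logs
# "Resetting controller successful" once it reconnects on the next path.
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The first pass evidently went through three such rounds, which is what the grep -c 'Resetting controller successful' check in the summary below verifies.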
00:25:15.572 
00:25:15.572 Latency(us)
00:25:15.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:15.572 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:15.572 Verification LBA range: start 0x0 length 0x4000
00:25:15.572 NVMe0n1 : 15.01 11958.66 46.71 1007.38 0.00 9855.50 758.52 18932.62
00:25:15.572 ===================================================================================================================
00:25:15.572 Total : 11958.66 46.71 1007.38 0.00 9855.50 758.52 18932.62
00:25:15.572 Received shutdown signal, test time was about 15.000000 seconds
00:25:15.572 
00:25:15.572 Latency(us)
00:25:15.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:15.572 ===================================================================================================================
00:25:15.572 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:15.572 01:33:06 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:15.572 01:33:06 -- host/failover.sh@65 -- # count=3
00:25:15.572 01:33:06 -- host/failover.sh@67 -- # (( count != 3 ))
00:25:15.572 01:33:06 -- host/failover.sh@73 -- # bdevperf_pid=732384
00:25:15.572 01:33:06 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:15.572 01:33:06 -- host/failover.sh@75 -- # waitforlisten 732384 /var/tmp/bdevperf.sock
00:25:15.572 01:33:06 -- common/autotest_common.sh@819 -- # '[' -z 732384 ']'
00:25:15.572 01:33:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:15.572 01:33:06 -- common/autotest_common.sh@824 -- # local max_retries=100
00:25:15.572 01:33:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
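Editor's note: the second bdevperf instance above is started with -z, i.e. it comes up idle on the private RPC socket /var/tmp/bdevperf.sock and is only given its controller and its workload over RPC. A minimal sketch of that drive-by-RPC pattern, based on the commands visible in this trace (the try.txt path used for the grep is an assumption, inferred from the file the script cats and removes later):

# Sketch only -- condensed from the host/failover.sh trace around this point.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

# Gate on the first pass: exactly three successful controller resets are expected.
count=$(grep -c 'Resetting controller successful' "$SPDK/test/nvmf/host/try.txt")   # log file path assumed
(( count != 3 )) && exit 1

# Start bdevperf idle; -z makes it wait for RPC instead of running the workload immediately.
"$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &

# Once the socket is listening, attach the controller paths (as in the sketch above) and kick off the run.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests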
00:25:15.572 01:33:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:15.572 01:33:06 -- common/autotest_common.sh@10 -- # set +x 00:25:15.831 01:33:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:15.831 01:33:07 -- common/autotest_common.sh@852 -- # return 0 00:25:15.831 01:33:07 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:16.089 [2024-07-27 01:33:07.773153] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:16.089 01:33:07 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:16.347 [2024-07-27 01:33:08.025876] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:16.347 01:33:08 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:16.913 NVMe0n1 00:25:16.913 01:33:08 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:17.479 00:25:17.479 01:33:08 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:17.737 00:25:17.737 01:33:09 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.737 01:33:09 -- host/failover.sh@82 -- # grep -q NVMe0 00:25:17.994 01:33:09 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:18.252 01:33:09 -- host/failover.sh@87 -- # sleep 3 00:25:21.538 01:33:12 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:21.538 01:33:12 -- host/failover.sh@88 -- # grep -q NVMe0 00:25:21.538 01:33:13 -- host/failover.sh@90 -- # run_test_pid=733716 00:25:21.538 01:33:13 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:21.538 01:33:13 -- host/failover.sh@92 -- # wait 733716 00:25:22.470 0 00:25:22.470 01:33:14 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:22.470 [2024-07-27 01:33:06.553982] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:25:22.470 [2024-07-27 01:33:06.554076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid732384 ] 00:25:22.470 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.470 [2024-07-27 01:33:06.614454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.470 [2024-07-27 01:33:06.721112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.470 [2024-07-27 01:33:09.772413] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:22.470 [2024-07-27 01:33:09.772517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.470 [2024-07-27 01:33:09.772541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.470 [2024-07-27 01:33:09.772559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.470 [2024-07-27 01:33:09.772572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.470 [2024-07-27 01:33:09.772587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.470 [2024-07-27 01:33:09.772601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.470 [2024-07-27 01:33:09.772615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.470 [2024-07-27 01:33:09.772630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.470 [2024-07-27 01:33:09.772645] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.470 [2024-07-27 01:33:09.772703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.470 [2024-07-27 01:33:09.772736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x887bd0 (9): Bad file descriptor 00:25:22.470 [2024-07-27 01:33:09.823145] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:22.470 Running I/O for 1 seconds... 
00:25:22.470 00:25:22.470 Latency(us) 00:25:22.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.470 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:22.470 Verification LBA range: start 0x0 length 0x4000 00:25:22.470 NVMe0n1 : 1.01 11813.60 46.15 0.00 0.00 10784.01 1450.29 16893.72 00:25:22.470 =================================================================================================================== 00:25:22.470 Total : 11813.60 46.15 0.00 0.00 10784.01 1450.29 16893.72 00:25:22.470 01:33:14 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.471 01:33:14 -- host/failover.sh@95 -- # grep -q NVMe0 00:25:22.728 01:33:14 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.985 01:33:14 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.985 01:33:14 -- host/failover.sh@99 -- # grep -q NVMe0 00:25:23.242 01:33:14 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:23.501 01:33:15 -- host/failover.sh@101 -- # sleep 3 00:25:26.789 01:33:18 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.789 01:33:18 -- host/failover.sh@103 -- # grep -q NVMe0 00:25:26.789 01:33:18 -- host/failover.sh@108 -- # killprocess 732384 00:25:26.789 01:33:18 -- common/autotest_common.sh@926 -- # '[' -z 732384 ']' 00:25:26.789 01:33:18 -- common/autotest_common.sh@930 -- # kill -0 732384 00:25:26.789 01:33:18 -- common/autotest_common.sh@931 -- # uname 00:25:26.789 01:33:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:26.789 01:33:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 732384 00:25:26.789 01:33:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:26.789 01:33:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:26.789 01:33:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 732384' 00:25:26.789 killing process with pid 732384 00:25:26.789 01:33:18 -- common/autotest_common.sh@945 -- # kill 732384 00:25:26.789 01:33:18 -- common/autotest_common.sh@950 -- # wait 732384 00:25:27.085 01:33:18 -- host/failover.sh@110 -- # sync 00:25:27.085 01:33:18 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:27.344 01:33:18 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:27.344 01:33:18 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:27.344 01:33:18 -- host/failover.sh@116 -- # nvmftestfini 00:25:27.344 01:33:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:27.344 01:33:18 -- nvmf/common.sh@116 -- # sync 00:25:27.344 01:33:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:27.344 01:33:18 -- nvmf/common.sh@119 -- # set +e 00:25:27.344 01:33:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:27.344 01:33:18 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 00:25:27.344 rmmod nvme_tcp 00:25:27.344 rmmod nvme_fabrics 00:25:27.344 rmmod nvme_keyring 00:25:27.344 01:33:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:27.344 01:33:19 -- nvmf/common.sh@123 -- # set -e 00:25:27.344 01:33:19 -- nvmf/common.sh@124 -- # return 0 00:25:27.344 01:33:19 -- nvmf/common.sh@477 -- # '[' -n 729922 ']' 00:25:27.344 01:33:19 -- nvmf/common.sh@478 -- # killprocess 729922 00:25:27.344 01:33:19 -- common/autotest_common.sh@926 -- # '[' -z 729922 ']' 00:25:27.344 01:33:19 -- common/autotest_common.sh@930 -- # kill -0 729922 00:25:27.344 01:33:19 -- common/autotest_common.sh@931 -- # uname 00:25:27.344 01:33:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:27.344 01:33:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 729922 00:25:27.344 01:33:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:27.344 01:33:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:27.344 01:33:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 729922' 00:25:27.344 killing process with pid 729922 00:25:27.344 01:33:19 -- common/autotest_common.sh@945 -- # kill 729922 00:25:27.344 01:33:19 -- common/autotest_common.sh@950 -- # wait 729922 00:25:27.602 01:33:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:27.602 01:33:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:27.602 01:33:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:27.602 01:33:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:27.602 01:33:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:27.602 01:33:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.602 01:33:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.602 01:33:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.141 01:33:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:30.141 00:25:30.141 real 0m36.937s 00:25:30.141 user 2m9.157s 00:25:30.141 sys 0m6.748s 00:25:30.141 01:33:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.141 01:33:21 -- common/autotest_common.sh@10 -- # set +x 00:25:30.141 ************************************ 00:25:30.141 END TEST nvmf_failover 00:25:30.141 ************************************ 00:25:30.141 01:33:21 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:30.141 01:33:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:30.141 01:33:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:30.141 01:33:21 -- common/autotest_common.sh@10 -- # set +x 00:25:30.141 ************************************ 00:25:30.141 START TEST nvmf_discovery 00:25:30.141 ************************************ 00:25:30.141 01:33:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:30.141 * Looking for test storage... 
00:25:30.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.141 01:33:21 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.141 01:33:21 -- nvmf/common.sh@7 -- # uname -s 00:25:30.141 01:33:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.141 01:33:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.141 01:33:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.141 01:33:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.141 01:33:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.141 01:33:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.141 01:33:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.141 01:33:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.141 01:33:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.141 01:33:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.141 01:33:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.141 01:33:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.141 01:33:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.141 01:33:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.141 01:33:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.141 01:33:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.141 01:33:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.141 01:33:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.141 01:33:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.141 01:33:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.141 01:33:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.141 01:33:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.141 01:33:21 -- paths/export.sh@5 -- # export PATH 00:25:30.141 01:33:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.141 01:33:21 -- nvmf/common.sh@46 -- # : 0 00:25:30.141 01:33:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:30.141 01:33:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:30.141 01:33:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:30.141 01:33:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.141 01:33:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.141 01:33:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:30.141 01:33:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:30.141 01:33:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:30.141 01:33:21 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:30.141 01:33:21 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:30.141 01:33:21 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:30.141 01:33:21 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:30.141 01:33:21 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:30.141 01:33:21 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:30.141 01:33:21 -- host/discovery.sh@25 -- # nvmftestinit 00:25:30.141 01:33:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:30.141 01:33:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.141 01:33:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:30.141 01:33:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:30.141 01:33:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:30.141 01:33:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.141 01:33:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.141 01:33:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.141 01:33:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:30.141 01:33:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:30.141 01:33:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:30.141 01:33:21 -- common/autotest_common.sh@10 -- # set +x 00:25:32.048 01:33:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:32.048 01:33:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:32.048 01:33:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:32.048 01:33:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:32.048 01:33:23 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:32.048 01:33:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:32.048 01:33:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:32.048 01:33:23 -- nvmf/common.sh@294 -- # net_devs=() 00:25:32.049 01:33:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:32.049 01:33:23 -- nvmf/common.sh@295 -- # e810=() 00:25:32.049 01:33:23 -- nvmf/common.sh@295 -- # local -ga e810 00:25:32.049 01:33:23 -- nvmf/common.sh@296 -- # x722=() 00:25:32.049 01:33:23 -- nvmf/common.sh@296 -- # local -ga x722 00:25:32.049 01:33:23 -- nvmf/common.sh@297 -- # mlx=() 00:25:32.049 01:33:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:32.049 01:33:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.049 01:33:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.049 01:33:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.049 01:33:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.049 01:33:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.049 01:33:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.049 01:33:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.049 01:33:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.049 01:33:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.049 01:33:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.049 01:33:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.049 01:33:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:32.049 01:33:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:32.049 01:33:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:32.049 01:33:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:32.049 01:33:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:32.049 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:32.049 01:33:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:32.049 01:33:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:32.049 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:32.049 01:33:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:32.049 01:33:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:32.049 
01:33:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.049 01:33:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:32.049 01:33:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.049 01:33:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:32.049 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:32.049 01:33:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.049 01:33:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:32.049 01:33:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.049 01:33:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:32.049 01:33:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.049 01:33:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:32.049 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:32.049 01:33:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.049 01:33:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:32.049 01:33:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:32.049 01:33:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:32.049 01:33:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.049 01:33:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.049 01:33:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.049 01:33:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:32.049 01:33:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.049 01:33:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.049 01:33:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:32.049 01:33:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.049 01:33:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.049 01:33:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:32.049 01:33:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:32.049 01:33:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.049 01:33:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.049 01:33:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.049 01:33:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.049 01:33:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:32.049 01:33:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.049 01:33:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.049 01:33:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.049 01:33:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:32.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:25:32.049 00:25:32.049 --- 10.0.0.2 ping statistics --- 00:25:32.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.049 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:25:32.049 01:33:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:25:32.049 00:25:32.049 --- 10.0.0.1 ping statistics --- 00:25:32.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.049 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:25:32.049 01:33:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.049 01:33:23 -- nvmf/common.sh@410 -- # return 0 00:25:32.049 01:33:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:32.049 01:33:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.049 01:33:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:32.049 01:33:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.049 01:33:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:32.049 01:33:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:32.049 01:33:23 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:32.049 01:33:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:32.049 01:33:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:32.049 01:33:23 -- common/autotest_common.sh@10 -- # set +x 00:25:32.049 01:33:23 -- nvmf/common.sh@469 -- # nvmfpid=736346 00:25:32.049 01:33:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:32.049 01:33:23 -- nvmf/common.sh@470 -- # waitforlisten 736346 00:25:32.049 01:33:23 -- common/autotest_common.sh@819 -- # '[' -z 736346 ']' 00:25:32.049 01:33:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.049 01:33:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:32.049 01:33:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.049 01:33:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:32.049 01:33:23 -- common/autotest_common.sh@10 -- # set +x 00:25:32.049 [2024-07-27 01:33:23.490301] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:32.049 [2024-07-27 01:33:23.490385] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.049 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.049 [2024-07-27 01:33:23.552951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.049 [2024-07-27 01:33:23.657407] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:32.049 [2024-07-27 01:33:23.657553] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.049 [2024-07-27 01:33:23.657572] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.049 [2024-07-27 01:33:23.657587] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:32.049 [2024-07-27 01:33:23.657613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.985 01:33:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:32.985 01:33:24 -- common/autotest_common.sh@852 -- # return 0 00:25:32.985 01:33:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:32.985 01:33:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:32.985 01:33:24 -- common/autotest_common.sh@10 -- # set +x 00:25:32.985 01:33:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.985 01:33:24 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:32.985 01:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.985 01:33:24 -- common/autotest_common.sh@10 -- # set +x 00:25:32.985 [2024-07-27 01:33:24.495784] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.985 01:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.985 01:33:24 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:32.985 01:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.985 01:33:24 -- common/autotest_common.sh@10 -- # set +x 00:25:32.985 [2024-07-27 01:33:24.503951] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:32.985 01:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.985 01:33:24 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:32.985 01:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.985 01:33:24 -- common/autotest_common.sh@10 -- # set +x 00:25:32.985 null0 00:25:32.985 01:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.985 01:33:24 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:32.985 01:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.985 01:33:24 -- common/autotest_common.sh@10 -- # set +x 00:25:32.985 null1 00:25:32.985 01:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.985 01:33:24 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:32.985 01:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.985 01:33:24 -- common/autotest_common.sh@10 -- # set +x 00:25:32.985 01:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.985 01:33:24 -- host/discovery.sh@45 -- # hostpid=736500 00:25:32.985 01:33:24 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:32.985 01:33:24 -- host/discovery.sh@46 -- # waitforlisten 736500 /tmp/host.sock 00:25:32.985 01:33:24 -- common/autotest_common.sh@819 -- # '[' -z 736500 ']' 00:25:32.985 01:33:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:25:32.985 01:33:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:32.985 01:33:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:32.985 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:32.985 01:33:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:32.985 01:33:24 -- common/autotest_common.sh@10 -- # set +x 00:25:32.985 [2024-07-27 01:33:24.571054] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:25:32.985 [2024-07-27 01:33:24.571127] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736500 ] 00:25:32.985 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.985 [2024-07-27 01:33:24.630875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.243 [2024-07-27 01:33:24.745080] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:33.243 [2024-07-27 01:33:24.745265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.810 01:33:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:33.810 01:33:25 -- common/autotest_common.sh@852 -- # return 0 00:25:33.810 01:33:25 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:33.810 01:33:25 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:33.810 01:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.810 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:33.810 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.810 01:33:25 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:33.810 01:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.810 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:33.810 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.810 01:33:25 -- host/discovery.sh@72 -- # notify_id=0 00:25:33.810 01:33:25 -- host/discovery.sh@78 -- # get_subsystem_names 00:25:33.810 01:33:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.810 01:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.810 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:33.810 01:33:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.810 01:33:25 -- host/discovery.sh@59 -- # sort 00:25:33.810 01:33:25 -- host/discovery.sh@59 -- # xargs 00:25:33.810 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.068 01:33:25 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:25:34.068 01:33:25 -- host/discovery.sh@79 -- # get_bdev_list 00:25:34.068 01:33:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.068 01:33:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.068 01:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.068 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:34.068 01:33:25 -- host/discovery.sh@55 -- # sort 00:25:34.068 01:33:25 -- host/discovery.sh@55 -- # xargs 00:25:34.068 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.068 01:33:25 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:25:34.068 01:33:25 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:34.068 01:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.068 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:34.068 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.068 01:33:25 -- host/discovery.sh@82 -- # get_subsystem_names 00:25:34.068 01:33:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:34.068 01:33:25 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:25:34.068 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:34.068 01:33:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:34.068 01:33:25 -- host/discovery.sh@59 -- # sort 00:25:34.068 01:33:25 -- host/discovery.sh@59 -- # xargs 00:25:34.068 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.068 01:33:25 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:25:34.068 01:33:25 -- host/discovery.sh@83 -- # get_bdev_list 00:25:34.068 01:33:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.068 01:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.068 01:33:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.068 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:34.068 01:33:25 -- host/discovery.sh@55 -- # sort 00:25:34.068 01:33:25 -- host/discovery.sh@55 -- # xargs 00:25:34.068 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.068 01:33:25 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:34.068 01:33:25 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:34.068 01:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.068 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:34.068 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.068 01:33:25 -- host/discovery.sh@86 -- # get_subsystem_names 00:25:34.068 01:33:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:34.068 01:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.069 01:33:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:34.069 01:33:25 -- host/discovery.sh@59 -- # sort 00:25:34.069 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:34.069 01:33:25 -- host/discovery.sh@59 -- # xargs 00:25:34.069 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.069 01:33:25 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:25:34.069 01:33:25 -- host/discovery.sh@87 -- # get_bdev_list 00:25:34.069 01:33:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.069 01:33:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.069 01:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.069 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:34.069 01:33:25 -- host/discovery.sh@55 -- # sort 00:25:34.069 01:33:25 -- host/discovery.sh@55 -- # xargs 00:25:34.069 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.069 01:33:25 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:34.069 01:33:25 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:34.069 01:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.069 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:34.069 [2024-07-27 01:33:25.819688] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.069 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.069 01:33:25 -- host/discovery.sh@92 -- # get_subsystem_names 00:25:34.069 01:33:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:34.329 01:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.329 01:33:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:34.329 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:34.329 01:33:25 -- host/discovery.sh@59 -- # sort 00:25:34.329 01:33:25 -- 
host/discovery.sh@59 -- # xargs 00:25:34.329 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.329 01:33:25 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:34.329 01:33:25 -- host/discovery.sh@93 -- # get_bdev_list 00:25:34.329 01:33:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.329 01:33:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.329 01:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.329 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:34.329 01:33:25 -- host/discovery.sh@55 -- # sort 00:25:34.329 01:33:25 -- host/discovery.sh@55 -- # xargs 00:25:34.329 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.329 01:33:25 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:25:34.329 01:33:25 -- host/discovery.sh@94 -- # get_notification_count 00:25:34.329 01:33:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:34.329 01:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.329 01:33:25 -- host/discovery.sh@74 -- # jq '. | length' 00:25:34.329 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:34.329 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.329 01:33:25 -- host/discovery.sh@74 -- # notification_count=0 00:25:34.329 01:33:25 -- host/discovery.sh@75 -- # notify_id=0 00:25:34.329 01:33:25 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:25:34.329 01:33:25 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:34.329 01:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.329 01:33:25 -- common/autotest_common.sh@10 -- # set +x 00:25:34.329 01:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.329 01:33:25 -- host/discovery.sh@100 -- # sleep 1 00:25:34.898 [2024-07-27 01:33:26.607268] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:34.898 [2024-07-27 01:33:26.607299] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:34.898 [2024-07-27 01:33:26.607323] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:35.156 [2024-07-27 01:33:26.694629] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:35.414 [2024-07-27 01:33:26.916036] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:35.414 [2024-07-27 01:33:26.916085] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:35.414 01:33:26 -- host/discovery.sh@101 -- # get_subsystem_names 00:25:35.414 01:33:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:35.414 01:33:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.414 01:33:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:35.414 01:33:26 -- common/autotest_common.sh@10 -- # set +x 00:25:35.414 01:33:26 -- host/discovery.sh@59 -- # sort 00:25:35.414 01:33:26 -- host/discovery.sh@59 -- # xargs 00:25:35.414 01:33:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.414 01:33:26 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.414 01:33:26 -- host/discovery.sh@102 -- # get_bdev_list 00:25:35.414 01:33:26 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.414 01:33:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.414 01:33:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.414 01:33:26 -- common/autotest_common.sh@10 -- # set +x 00:25:35.414 01:33:26 -- host/discovery.sh@55 -- # sort 00:25:35.414 01:33:26 -- host/discovery.sh@55 -- # xargs 00:25:35.414 01:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.414 01:33:27 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:35.414 01:33:27 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:25:35.414 01:33:27 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:35.414 01:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.414 01:33:27 -- common/autotest_common.sh@10 -- # set +x 00:25:35.414 01:33:27 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:35.414 01:33:27 -- host/discovery.sh@63 -- # sort -n 00:25:35.414 01:33:27 -- host/discovery.sh@63 -- # xargs 00:25:35.414 01:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.414 01:33:27 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:25:35.414 01:33:27 -- host/discovery.sh@104 -- # get_notification_count 00:25:35.414 01:33:27 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:35.414 01:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.414 01:33:27 -- host/discovery.sh@74 -- # jq '. | length' 00:25:35.415 01:33:27 -- common/autotest_common.sh@10 -- # set +x 00:25:35.415 01:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.415 01:33:27 -- host/discovery.sh@74 -- # notification_count=1 00:25:35.415 01:33:27 -- host/discovery.sh@75 -- # notify_id=1 00:25:35.415 01:33:27 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:25:35.415 01:33:27 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:35.415 01:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.415 01:33:27 -- common/autotest_common.sh@10 -- # set +x 00:25:35.415 01:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.415 01:33:27 -- host/discovery.sh@109 -- # sleep 1 00:25:36.792 01:33:28 -- host/discovery.sh@110 -- # get_bdev_list 00:25:36.792 01:33:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.792 01:33:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:36.792 01:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.792 01:33:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.792 01:33:28 -- host/discovery.sh@55 -- # sort 00:25:36.792 01:33:28 -- host/discovery.sh@55 -- # xargs 00:25:36.792 01:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.792 01:33:28 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:36.792 01:33:28 -- host/discovery.sh@111 -- # get_notification_count 00:25:36.792 01:33:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:36.792 01:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.792 01:33:28 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:36.792 01:33:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.792 01:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.792 01:33:28 -- host/discovery.sh@74 -- # notification_count=1 00:25:36.792 01:33:28 -- host/discovery.sh@75 -- # notify_id=2 00:25:36.792 01:33:28 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:25:36.792 01:33:28 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:36.792 01:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.792 01:33:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.792 [2024-07-27 01:33:28.206630] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:36.792 [2024-07-27 01:33:28.207701] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:36.792 [2024-07-27 01:33:28.207738] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:36.792 01:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.792 01:33:28 -- host/discovery.sh@117 -- # sleep 1 00:25:36.792 [2024-07-27 01:33:28.336151] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:37.050 [2024-07-27 01:33:28.565307] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:37.050 [2024-07-27 01:33:28.565331] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:37.050 [2024-07-27 01:33:28.565341] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:37.616 01:33:29 -- host/discovery.sh@118 -- # get_subsystem_names 00:25:37.616 01:33:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:37.616 01:33:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.616 01:33:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:37.616 01:33:29 -- common/autotest_common.sh@10 -- # set +x 00:25:37.616 01:33:29 -- host/discovery.sh@59 -- # sort 00:25:37.616 01:33:29 -- host/discovery.sh@59 -- # xargs 00:25:37.616 01:33:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.616 01:33:29 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.616 01:33:29 -- host/discovery.sh@119 -- # get_bdev_list 00:25:37.616 01:33:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.616 01:33:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.616 01:33:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.616 01:33:29 -- host/discovery.sh@55 -- # sort 00:25:37.616 01:33:29 -- common/autotest_common.sh@10 -- # set +x 00:25:37.616 01:33:29 -- host/discovery.sh@55 -- # xargs 00:25:37.616 01:33:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.616 01:33:29 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:37.616 01:33:29 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:25:37.616 01:33:29 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:37.616 01:33:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.616 01:33:29 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:37.616 01:33:29 -- 
common/autotest_common.sh@10 -- # set +x 00:25:37.616 01:33:29 -- host/discovery.sh@63 -- # sort -n 00:25:37.616 01:33:29 -- host/discovery.sh@63 -- # xargs 00:25:37.616 01:33:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.616 01:33:29 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:37.616 01:33:29 -- host/discovery.sh@121 -- # get_notification_count 00:25:37.616 01:33:29 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:37.616 01:33:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.616 01:33:29 -- host/discovery.sh@74 -- # jq '. | length' 00:25:37.616 01:33:29 -- common/autotest_common.sh@10 -- # set +x 00:25:37.616 01:33:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.875 01:33:29 -- host/discovery.sh@74 -- # notification_count=0 00:25:37.875 01:33:29 -- host/discovery.sh@75 -- # notify_id=2 00:25:37.875 01:33:29 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:25:37.875 01:33:29 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:37.875 01:33:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.875 01:33:29 -- common/autotest_common.sh@10 -- # set +x 00:25:37.875 [2024-07-27 01:33:29.379002] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:37.875 [2024-07-27 01:33:29.379036] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:37.875 [2024-07-27 01:33:29.379158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.875 [2024-07-27 01:33:29.379186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.875 [2024-07-27 01:33:29.379210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.875 [2024-07-27 01:33:29.379225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.875 [2024-07-27 01:33:29.379240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.875 [2024-07-27 01:33:29.379253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.875 [2024-07-27 01:33:29.379267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.875 [2024-07-27 01:33:29.379281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.875 [2024-07-27 01:33:29.379295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabb80 is same with the state(5) to be set 00:25:37.875 01:33:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.875 01:33:29 -- host/discovery.sh@127 -- # sleep 1 00:25:37.875 [2024-07-27 01:33:29.389160] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaabb80 (9): Bad file descriptor 00:25:37.875 [2024-07-27 01:33:29.399207] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.875 [2024-07-27 
01:33:29.399519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.875 [2024-07-27 01:33:29.399826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.875 [2024-07-27 01:33:29.399852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaabb80 with addr=10.0.0.2, port=4420 00:25:37.875 [2024-07-27 01:33:29.399869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabb80 is same with the state(5) to be set 00:25:37.876 [2024-07-27 01:33:29.399896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaabb80 (9): Bad file descriptor 00:25:37.876 [2024-07-27 01:33:29.399917] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.876 [2024-07-27 01:33:29.399947] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.876 [2024-07-27 01:33:29.399971] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.876 [2024-07-27 01:33:29.399994] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:37.876 [2024-07-27 01:33:29.409283] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.876 [2024-07-27 01:33:29.409517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.876 [2024-07-27 01:33:29.409679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.876 [2024-07-27 01:33:29.409719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaabb80 with addr=10.0.0.2, port=4420 00:25:37.876 [2024-07-27 01:33:29.409736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabb80 is same with the state(5) to be set 00:25:37.876 [2024-07-27 01:33:29.409758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaabb80 (9): Bad file descriptor 00:25:37.876 [2024-07-27 01:33:29.409778] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.876 [2024-07-27 01:33:29.409806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.876 [2024-07-27 01:33:29.409819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.876 [2024-07-27 01:33:29.409863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:37.876 [2024-07-27 01:33:29.419369] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.876 [2024-07-27 01:33:29.419617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.876 [2024-07-27 01:33:29.419813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.876 [2024-07-27 01:33:29.419839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaabb80 with addr=10.0.0.2, port=4420 00:25:37.876 [2024-07-27 01:33:29.419856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabb80 is same with the state(5) to be set 00:25:37.876 [2024-07-27 01:33:29.419878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaabb80 (9): Bad file descriptor 00:25:37.876 [2024-07-27 01:33:29.419899] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.876 [2024-07-27 01:33:29.419913] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.876 [2024-07-27 01:33:29.419926] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.876 [2024-07-27 01:33:29.419961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:37.876 [2024-07-27 01:33:29.429441] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.876 [2024-07-27 01:33:29.429689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.876 [2024-07-27 01:33:29.429872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.876 [2024-07-27 01:33:29.429914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaabb80 with addr=10.0.0.2, port=4420 00:25:37.876 [2024-07-27 01:33:29.429931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabb80 is same with the state(5) to be set 00:25:37.876 [2024-07-27 01:33:29.429953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaabb80 (9): Bad file descriptor 00:25:37.876 [2024-07-27 01:33:29.429988] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.876 [2024-07-27 01:33:29.430002] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.876 [2024-07-27 01:33:29.430015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.876 [2024-07-27 01:33:29.430033] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:37.876 [2024-07-27 01:33:29.439522] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.876 [2024-07-27 01:33:29.439737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.876 [2024-07-27 01:33:29.439922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.876 [2024-07-27 01:33:29.439952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaabb80 with addr=10.0.0.2, port=4420 00:25:37.876 [2024-07-27 01:33:29.439971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabb80 is same with the state(5) to be set 00:25:37.876 [2024-07-27 01:33:29.439996] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaabb80 (9): Bad file descriptor 00:25:37.876 [2024-07-27 01:33:29.440019] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.876 [2024-07-27 01:33:29.440034] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.876 [2024-07-27 01:33:29.440049] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.876 [2024-07-27 01:33:29.440081] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:37.876 [2024-07-27 01:33:29.449600] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.876 [2024-07-27 01:33:29.449846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.876 [2024-07-27 01:33:29.450085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.876 [2024-07-27 01:33:29.450122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaabb80 with addr=10.0.0.2, port=4420 00:25:37.876 [2024-07-27 01:33:29.450138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabb80 is same with the state(5) to be set 00:25:37.876 [2024-07-27 01:33:29.450161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaabb80 (9): Bad file descriptor 00:25:37.876 [2024-07-27 01:33:29.450182] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.876 [2024-07-27 01:33:29.450196] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.876 [2024-07-27 01:33:29.450209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.876 [2024-07-27 01:33:29.450244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
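The repeated blocks above, and the one that follows, are the host-side bdev_nvme driver retrying the 10.0.0.2:4420 path after the test removed that listener from the target: each connect() fails with errno 111 (connection refused) until the next discovery log page is processed, after which the stale 4420 path is dropped and only 4421 remains. A minimal sketch of the target/host RPC pair behind this behaviour, reusing only commands that already appear in this log (rpc_cmd and the /tmp/host.sock socket come from the test's common scripts):

rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# host keeps retrying 4420 (connect() -> errno 111) until the discovery poller
# removes the stale path; afterwards only 4421 should be reported:
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'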
00:25:37.876 [2024-07-27 01:33:29.459681] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.876 [2024-07-27 01:33:29.459888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.876 [2024-07-27 01:33:29.460124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.876 [2024-07-27 01:33:29.460151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaabb80 with addr=10.0.0.2, port=4420 00:25:37.876 [2024-07-27 01:33:29.460168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabb80 is same with the state(5) to be set 00:25:37.876 [2024-07-27 01:33:29.460190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaabb80 (9): Bad file descriptor 00:25:37.876 [2024-07-27 01:33:29.460211] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:37.876 [2024-07-27 01:33:29.460240] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:37.876 [2024-07-27 01:33:29.460253] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:37.876 [2024-07-27 01:33:29.460273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:37.876 [2024-07-27 01:33:29.466357] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:37.876 [2024-07-27 01:33:29.466386] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:38.816 01:33:30 -- host/discovery.sh@128 -- # get_subsystem_names 00:25:38.816 01:33:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.816 01:33:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.816 01:33:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.816 01:33:30 -- common/autotest_common.sh@10 -- # set +x 00:25:38.816 01:33:30 -- host/discovery.sh@59 -- # sort 00:25:38.816 01:33:30 -- host/discovery.sh@59 -- # xargs 00:25:38.816 01:33:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.816 01:33:30 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.816 01:33:30 -- host/discovery.sh@129 -- # get_bdev_list 00:25:38.816 01:33:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.816 01:33:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.816 01:33:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.816 01:33:30 -- common/autotest_common.sh@10 -- # set +x 00:25:38.816 01:33:30 -- host/discovery.sh@55 -- # sort 00:25:38.816 01:33:30 -- host/discovery.sh@55 -- # xargs 00:25:38.816 01:33:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.816 01:33:30 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:38.816 01:33:30 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:25:38.816 01:33:30 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:38.816 01:33:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.816 01:33:30 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:38.816 01:33:30 -- common/autotest_common.sh@10 -- # set +x 00:25:38.816 01:33:30 -- 
host/discovery.sh@63 -- # sort -n 00:25:38.816 01:33:30 -- host/discovery.sh@63 -- # xargs 00:25:38.816 01:33:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.816 01:33:30 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:25:38.816 01:33:30 -- host/discovery.sh@131 -- # get_notification_count 00:25:38.816 01:33:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:38.816 01:33:30 -- host/discovery.sh@74 -- # jq '. | length' 00:25:38.816 01:33:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.816 01:33:30 -- common/autotest_common.sh@10 -- # set +x 00:25:38.816 01:33:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.816 01:33:30 -- host/discovery.sh@74 -- # notification_count=0 00:25:38.816 01:33:30 -- host/discovery.sh@75 -- # notify_id=2 00:25:38.816 01:33:30 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:25:38.816 01:33:30 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:38.816 01:33:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.816 01:33:30 -- common/autotest_common.sh@10 -- # set +x 00:25:38.816 01:33:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.816 01:33:30 -- host/discovery.sh@135 -- # sleep 1 00:25:40.196 01:33:31 -- host/discovery.sh@136 -- # get_subsystem_names 00:25:40.196 01:33:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:40.196 01:33:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:40.196 01:33:31 -- host/discovery.sh@59 -- # sort 00:25:40.196 01:33:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.196 01:33:31 -- common/autotest_common.sh@10 -- # set +x 00:25:40.196 01:33:31 -- host/discovery.sh@59 -- # xargs 00:25:40.196 01:33:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.196 01:33:31 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:25:40.196 01:33:31 -- host/discovery.sh@137 -- # get_bdev_list 00:25:40.196 01:33:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.196 01:33:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.196 01:33:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.196 01:33:31 -- common/autotest_common.sh@10 -- # set +x 00:25:40.196 01:33:31 -- host/discovery.sh@55 -- # sort 00:25:40.196 01:33:31 -- host/discovery.sh@55 -- # xargs 00:25:40.196 01:33:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.196 01:33:31 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:25:40.196 01:33:31 -- host/discovery.sh@138 -- # get_notification_count 00:25:40.196 01:33:31 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:40.196 01:33:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.196 01:33:31 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:40.196 01:33:31 -- common/autotest_common.sh@10 -- # set +x 00:25:40.196 01:33:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.196 01:33:31 -- host/discovery.sh@74 -- # notification_count=2 00:25:40.196 01:33:31 -- host/discovery.sh@75 -- # notify_id=4 00:25:40.196 01:33:31 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:25:40.196 01:33:31 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.196 01:33:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.196 01:33:31 -- common/autotest_common.sh@10 -- # set +x 00:25:41.130 [2024-07-27 01:33:32.733264] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:41.130 [2024-07-27 01:33:32.733290] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:41.130 [2024-07-27 01:33:32.733314] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:41.130 [2024-07-27 01:33:32.820625] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:41.389 [2024-07-27 01:33:32.926197] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:41.389 [2024-07-27 01:33:32.926230] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:41.389 01:33:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.389 01:33:32 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:41.389 01:33:32 -- common/autotest_common.sh@640 -- # local es=0 00:25:41.389 01:33:32 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:41.389 01:33:32 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:41.389 01:33:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:41.389 01:33:32 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:41.389 01:33:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:41.389 01:33:32 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:41.389 01:33:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.389 01:33:32 -- common/autotest_common.sh@10 -- # set +x 00:25:41.389 request: 00:25:41.389 { 00:25:41.389 "name": "nvme", 00:25:41.389 "trtype": "tcp", 00:25:41.389 "traddr": "10.0.0.2", 00:25:41.389 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:41.389 "adrfam": "ipv4", 00:25:41.389 "trsvcid": "8009", 00:25:41.389 "wait_for_attach": true, 00:25:41.389 "method": "bdev_nvme_start_discovery", 00:25:41.389 "req_id": 1 00:25:41.389 } 00:25:41.389 Got JSON-RPC error response 00:25:41.389 response: 00:25:41.389 { 00:25:41.389 "code": -17, 00:25:41.389 "message": "File exists" 00:25:41.389 } 00:25:41.389 01:33:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:41.389 01:33:32 -- common/autotest_common.sh@643 -- # es=1 00:25:41.389 01:33:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:41.389 01:33:32 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:41.389 01:33:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:41.389 01:33:32 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:25:41.389 01:33:32 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:41.389 01:33:32 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:41.389 01:33:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.389 01:33:32 -- common/autotest_common.sh@10 -- # set +x 00:25:41.389 01:33:32 -- host/discovery.sh@67 -- # sort 00:25:41.389 01:33:32 -- host/discovery.sh@67 -- # xargs 00:25:41.389 01:33:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.389 01:33:32 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:25:41.389 01:33:32 -- host/discovery.sh@147 -- # get_bdev_list 00:25:41.389 01:33:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.389 01:33:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.389 01:33:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.389 01:33:32 -- common/autotest_common.sh@10 -- # set +x 00:25:41.389 01:33:32 -- host/discovery.sh@55 -- # sort 00:25:41.389 01:33:32 -- host/discovery.sh@55 -- # xargs 00:25:41.389 01:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.389 01:33:33 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:41.389 01:33:33 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:41.389 01:33:33 -- common/autotest_common.sh@640 -- # local es=0 00:25:41.389 01:33:33 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:41.389 01:33:33 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:41.389 01:33:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:41.389 01:33:33 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:41.389 01:33:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:41.389 01:33:33 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:41.389 01:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.389 01:33:33 -- common/autotest_common.sh@10 -- # set +x 00:25:41.389 request: 00:25:41.389 { 00:25:41.389 "name": "nvme_second", 00:25:41.389 "trtype": "tcp", 00:25:41.389 "traddr": "10.0.0.2", 00:25:41.389 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:41.389 "adrfam": "ipv4", 00:25:41.389 "trsvcid": "8009", 00:25:41.389 "wait_for_attach": true, 00:25:41.389 "method": "bdev_nvme_start_discovery", 00:25:41.389 "req_id": 1 00:25:41.389 } 00:25:41.389 Got JSON-RPC error response 00:25:41.389 response: 00:25:41.389 { 00:25:41.389 "code": -17, 00:25:41.389 "message": "File exists" 00:25:41.389 } 00:25:41.389 01:33:33 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:41.389 01:33:33 -- common/autotest_common.sh@643 -- # es=1 00:25:41.389 01:33:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:41.389 01:33:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:41.389 01:33:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:41.390 01:33:33 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:25:41.390 
01:33:33 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:41.390 01:33:33 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:41.390 01:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.390 01:33:33 -- common/autotest_common.sh@10 -- # set +x 00:25:41.390 01:33:33 -- host/discovery.sh@67 -- # sort 00:25:41.390 01:33:33 -- host/discovery.sh@67 -- # xargs 00:25:41.390 01:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.390 01:33:33 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:25:41.390 01:33:33 -- host/discovery.sh@153 -- # get_bdev_list 00:25:41.390 01:33:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.390 01:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.390 01:33:33 -- common/autotest_common.sh@10 -- # set +x 00:25:41.390 01:33:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.390 01:33:33 -- host/discovery.sh@55 -- # sort 00:25:41.390 01:33:33 -- host/discovery.sh@55 -- # xargs 00:25:41.390 01:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.390 01:33:33 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:41.390 01:33:33 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:41.390 01:33:33 -- common/autotest_common.sh@640 -- # local es=0 00:25:41.390 01:33:33 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:41.390 01:33:33 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:41.390 01:33:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:41.390 01:33:33 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:41.390 01:33:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:41.390 01:33:33 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:41.390 01:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.390 01:33:33 -- common/autotest_common.sh@10 -- # set +x 00:25:42.769 [2024-07-27 01:33:34.134014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.769 [2024-07-27 01:33:34.134242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.769 [2024-07-27 01:33:34.134271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa1070 with addr=10.0.0.2, port=8010 00:25:42.769 [2024-07-27 01:33:34.134303] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:42.769 [2024-07-27 01:33:34.134320] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:42.769 [2024-07-27 01:33:34.134333] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:43.706 [2024-07-27 01:33:35.136529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.706 [2024-07-27 01:33:35.136846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.706 [2024-07-27 01:33:35.136875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa1070 with addr=10.0.0.2, port=8010 00:25:43.706 
[2024-07-27 01:33:35.136921] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:43.706 [2024-07-27 01:33:35.136938] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:43.706 [2024-07-27 01:33:35.136951] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:44.671 [2024-07-27 01:33:36.138577] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:44.671 request: 00:25:44.671 { 00:25:44.671 "name": "nvme_second", 00:25:44.671 "trtype": "tcp", 00:25:44.671 "traddr": "10.0.0.2", 00:25:44.671 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:44.671 "adrfam": "ipv4", 00:25:44.671 "trsvcid": "8010", 00:25:44.671 "attach_timeout_ms": 3000, 00:25:44.671 "method": "bdev_nvme_start_discovery", 00:25:44.671 "req_id": 1 00:25:44.671 } 00:25:44.671 Got JSON-RPC error response 00:25:44.671 response: 00:25:44.671 { 00:25:44.671 "code": -110, 00:25:44.671 "message": "Connection timed out" 00:25:44.671 } 00:25:44.671 01:33:36 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:44.671 01:33:36 -- common/autotest_common.sh@643 -- # es=1 00:25:44.671 01:33:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:44.671 01:33:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:44.671 01:33:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:44.671 01:33:36 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:25:44.671 01:33:36 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:44.671 01:33:36 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:44.672 01:33:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.672 01:33:36 -- common/autotest_common.sh@10 -- # set +x 00:25:44.672 01:33:36 -- host/discovery.sh@67 -- # sort 00:25:44.672 01:33:36 -- host/discovery.sh@67 -- # xargs 00:25:44.672 01:33:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.672 01:33:36 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:25:44.672 01:33:36 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:25:44.672 01:33:36 -- host/discovery.sh@162 -- # kill 736500 00:25:44.672 01:33:36 -- host/discovery.sh@163 -- # nvmftestfini 00:25:44.672 01:33:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:44.672 01:33:36 -- nvmf/common.sh@116 -- # sync 00:25:44.672 01:33:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:44.672 01:33:36 -- nvmf/common.sh@119 -- # set +e 00:25:44.672 01:33:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:44.672 01:33:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:44.672 rmmod nvme_tcp 00:25:44.672 rmmod nvme_fabrics 00:25:44.672 rmmod nvme_keyring 00:25:44.672 01:33:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:44.672 01:33:36 -- nvmf/common.sh@123 -- # set -e 00:25:44.672 01:33:36 -- nvmf/common.sh@124 -- # return 0 00:25:44.672 01:33:36 -- nvmf/common.sh@477 -- # '[' -n 736346 ']' 00:25:44.672 01:33:36 -- nvmf/common.sh@478 -- # killprocess 736346 00:25:44.672 01:33:36 -- common/autotest_common.sh@926 -- # '[' -z 736346 ']' 00:25:44.672 01:33:36 -- common/autotest_common.sh@930 -- # kill -0 736346 00:25:44.672 01:33:36 -- common/autotest_common.sh@931 -- # uname 00:25:44.672 01:33:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:44.672 01:33:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 736346 00:25:44.672 01:33:36 -- common/autotest_common.sh@932 -- # 
process_name=reactor_1 00:25:44.672 01:33:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:44.672 01:33:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 736346' 00:25:44.672 killing process with pid 736346 00:25:44.672 01:33:36 -- common/autotest_common.sh@945 -- # kill 736346 00:25:44.672 01:33:36 -- common/autotest_common.sh@950 -- # wait 736346 00:25:44.937 01:33:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:44.937 01:33:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:44.937 01:33:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:44.937 01:33:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:44.937 01:33:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:44.937 01:33:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.937 01:33:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.937 01:33:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.841 01:33:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:46.841 00:25:46.841 real 0m17.173s 00:25:46.841 user 0m26.838s 00:25:46.841 sys 0m2.785s 00:25:46.841 01:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.841 01:33:38 -- common/autotest_common.sh@10 -- # set +x 00:25:46.841 ************************************ 00:25:46.841 END TEST nvmf_discovery 00:25:46.841 ************************************ 00:25:47.099 01:33:38 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:47.099 01:33:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:47.099 01:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:47.099 01:33:38 -- common/autotest_common.sh@10 -- # set +x 00:25:47.099 ************************************ 00:25:47.099 START TEST nvmf_discovery_remove_ifc 00:25:47.099 ************************************ 00:25:47.099 01:33:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:47.099 * Looking for test storage... 
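At this point the discovery suite has been torn down and the harness has launched the next script through run_test, which emits the START/END banners seen above. The teardown order visible in the log repeats for every host suite; a sketch using only the steps logged here (the pids are from this particular run, and killprocess/nvmftestfini are helpers from the test's common scripts):

kill 736500                  # stop the host app listening on /tmp/host.sock
modprobe -v -r nvme-tcp      # unload initiator kernel modules (part of nvmftestfini)
modprobe -v -r nvme-fabrics
killprocess 736346           # stop the nvmf_tgt that served this suite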
00:25:47.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:47.099 01:33:38 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.099 01:33:38 -- nvmf/common.sh@7 -- # uname -s 00:25:47.099 01:33:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.099 01:33:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.099 01:33:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.099 01:33:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.099 01:33:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.099 01:33:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.099 01:33:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.099 01:33:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.099 01:33:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.099 01:33:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.099 01:33:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:47.099 01:33:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:47.099 01:33:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.099 01:33:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.099 01:33:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.099 01:33:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.099 01:33:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.099 01:33:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.099 01:33:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.100 01:33:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.100 01:33:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.100 01:33:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.100 01:33:38 -- paths/export.sh@5 -- # export PATH 00:25:47.100 01:33:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.100 01:33:38 -- nvmf/common.sh@46 -- # : 0 00:25:47.100 01:33:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:47.100 01:33:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:47.100 01:33:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:47.100 01:33:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.100 01:33:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.100 01:33:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:47.100 01:33:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:47.100 01:33:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:47.100 01:33:38 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:47.100 01:33:38 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:47.100 01:33:38 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:47.100 01:33:38 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:47.100 01:33:38 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:47.100 01:33:38 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:47.100 01:33:38 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:47.100 01:33:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:47.100 01:33:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.100 01:33:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:47.100 01:33:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:47.100 01:33:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:47.100 01:33:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.100 01:33:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:47.100 01:33:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.100 01:33:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:47.100 01:33:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:47.100 01:33:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:47.100 01:33:38 -- common/autotest_common.sh@10 -- # set +x 00:25:49.004 01:33:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:49.004 01:33:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:49.004 01:33:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:49.004 01:33:40 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:49.004 01:33:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:49.004 01:33:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:49.004 01:33:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:49.004 01:33:40 -- nvmf/common.sh@294 -- # net_devs=() 00:25:49.004 01:33:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:49.004 01:33:40 -- nvmf/common.sh@295 -- # e810=() 00:25:49.004 01:33:40 -- nvmf/common.sh@295 -- # local -ga e810 00:25:49.004 01:33:40 -- nvmf/common.sh@296 -- # x722=() 00:25:49.004 01:33:40 -- nvmf/common.sh@296 -- # local -ga x722 00:25:49.004 01:33:40 -- nvmf/common.sh@297 -- # mlx=() 00:25:49.004 01:33:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:49.004 01:33:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.004 01:33:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.004 01:33:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.004 01:33:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.004 01:33:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.004 01:33:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.004 01:33:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.004 01:33:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.004 01:33:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.004 01:33:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.004 01:33:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.004 01:33:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:49.004 01:33:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:49.004 01:33:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:49.004 01:33:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:49.004 01:33:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:49.004 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:49.004 01:33:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:49.004 01:33:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:49.004 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:49.004 01:33:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:49.004 01:33:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:49.004 01:33:40 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:49.004 01:33:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.004 01:33:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:49.004 01:33:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.004 01:33:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:49.004 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:49.004 01:33:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.004 01:33:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:49.004 01:33:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.004 01:33:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:49.004 01:33:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.004 01:33:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:49.004 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:49.004 01:33:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.004 01:33:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:49.004 01:33:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:49.004 01:33:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:49.004 01:33:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:49.004 01:33:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.004 01:33:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.004 01:33:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.004 01:33:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:49.004 01:33:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.004 01:33:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.004 01:33:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:49.004 01:33:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.004 01:33:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.004 01:33:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:49.004 01:33:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:49.004 01:33:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.004 01:33:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.004 01:33:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.004 01:33:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.004 01:33:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:49.004 01:33:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.004 01:33:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.004 01:33:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.004 01:33:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:49.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:49.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:25:49.004 00:25:49.004 --- 10.0.0.2 ping statistics --- 00:25:49.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.004 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:25:49.004 01:33:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:25:49.004 00:25:49.004 --- 10.0.0.1 ping statistics --- 00:25:49.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.004 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:25:49.004 01:33:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.004 01:33:40 -- nvmf/common.sh@410 -- # return 0 00:25:49.004 01:33:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:49.004 01:33:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.262 01:33:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:49.262 01:33:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:49.263 01:33:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.263 01:33:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:49.263 01:33:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:49.263 01:33:40 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:49.263 01:33:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:49.263 01:33:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:49.263 01:33:40 -- common/autotest_common.sh@10 -- # set +x 00:25:49.263 01:33:40 -- nvmf/common.sh@469 -- # nvmfpid=740092 00:25:49.263 01:33:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:49.263 01:33:40 -- nvmf/common.sh@470 -- # waitforlisten 740092 00:25:49.263 01:33:40 -- common/autotest_common.sh@819 -- # '[' -z 740092 ']' 00:25:49.263 01:33:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.263 01:33:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:49.263 01:33:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.263 01:33:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:49.263 01:33:40 -- common/autotest_common.sh@10 -- # set +x 00:25:49.263 [2024-07-27 01:33:40.834393] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:49.263 [2024-07-27 01:33:40.834487] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.263 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.263 [2024-07-27 01:33:40.911795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.521 [2024-07-27 01:33:41.030613] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:49.521 [2024-07-27 01:33:41.030756] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
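Note: the nvmf_tcp_init sequence above carves the two E810 ports into a target/initiator pair by moving one port into a private network namespace, then verifies the path with a ping in each direction before the target application is started. A minimal sketch of the same steps, using the cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addresses seen in this log (a sketch of the test flow, not the verbatim nvmf/common.sh helper):

    #!/usr/bin/env bash
    # Namespace-based NVMe/TCP test topology, as performed by nvmf_tcp_init above.
    set -e
    TARGET_IF=cvl_0_0        # moved into the namespace, becomes 10.0.0.2 (target side)
    INITIATOR_IF=cvl_0_1     # stays in the default namespace, 10.0.0.1 (initiator side)
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # let NVMe/TCP traffic reach port 4420 on the initiator side
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # sanity-check both directions, exactly as logged above
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

All later target-side commands are then wrapped in 'ip netns exec cvl_0_0_ns_spdk', which is why nvmf_tgt is started through that prefix in the lines that follow.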
00:25:49.521 [2024-07-27 01:33:41.030773] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.521 [2024-07-27 01:33:41.030785] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.521 [2024-07-27 01:33:41.030813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.085 01:33:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:50.085 01:33:41 -- common/autotest_common.sh@852 -- # return 0 00:25:50.085 01:33:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:50.085 01:33:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:50.085 01:33:41 -- common/autotest_common.sh@10 -- # set +x 00:25:50.085 01:33:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.085 01:33:41 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:50.085 01:33:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.085 01:33:41 -- common/autotest_common.sh@10 -- # set +x 00:25:50.085 [2024-07-27 01:33:41.835631] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.342 [2024-07-27 01:33:41.843818] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:50.342 null0 00:25:50.342 [2024-07-27 01:33:41.875780] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.342 01:33:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.342 01:33:41 -- host/discovery_remove_ifc.sh@59 -- # hostpid=740235 00:25:50.342 01:33:41 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:50.342 01:33:41 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 740235 /tmp/host.sock 00:25:50.342 01:33:41 -- common/autotest_common.sh@819 -- # '[' -z 740235 ']' 00:25:50.342 01:33:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:25:50.342 01:33:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:50.342 01:33:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:50.342 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:50.342 01:33:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:50.342 01:33:41 -- common/autotest_common.sh@10 -- # set +x 00:25:50.342 [2024-07-27 01:33:41.939056] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:25:50.342 [2024-07-27 01:33:41.939167] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740235 ] 00:25:50.342 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.342 [2024-07-27 01:33:41.997720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.598 [2024-07-27 01:33:42.103551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:50.598 [2024-07-27 01:33:42.103723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.598 01:33:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:50.598 01:33:42 -- common/autotest_common.sh@852 -- # return 0 00:25:50.598 01:33:42 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:50.598 01:33:42 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:50.598 01:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.598 01:33:42 -- common/autotest_common.sh@10 -- # set +x 00:25:50.598 01:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.598 01:33:42 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:50.598 01:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.598 01:33:42 -- common/autotest_common.sh@10 -- # set +x 00:25:50.598 01:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.598 01:33:42 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:50.598 01:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.598 01:33:42 -- common/autotest_common.sh@10 -- # set +x 00:25:51.533 [2024-07-27 01:33:43.273721] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:51.533 [2024-07-27 01:33:43.273756] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:51.533 [2024-07-27 01:33:43.273784] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:51.792 [2024-07-27 01:33:43.401315] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:51.792 [2024-07-27 01:33:43.464124] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:51.792 [2024-07-27 01:33:43.464174] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:51.792 [2024-07-27 01:33:43.464212] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:51.792 [2024-07-27 01:33:43.464235] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:51.792 [2024-07-27 01:33:43.464260] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:51.792 01:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:51.792 01:33:43 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:51.792 01:33:43 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:25:51.792 01:33:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.792 01:33:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:51.792 01:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:51.792 01:33:43 -- common/autotest_common.sh@10 -- # set +x 00:25:51.792 01:33:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:51.792 01:33:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:51.792 [2024-07-27 01:33:43.471963] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x228a1f0 was disconnected and freed. delete nvme_qpair. 00:25:51.792 01:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:51.792 01:33:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:51.792 01:33:43 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:51.792 01:33:43 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:52.050 01:33:43 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:52.050 01:33:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:52.050 01:33:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.050 01:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.050 01:33:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:52.050 01:33:43 -- common/autotest_common.sh@10 -- # set +x 00:25:52.050 01:33:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.050 01:33:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.050 01:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.050 01:33:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:52.050 01:33:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:52.986 01:33:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:52.986 01:33:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.986 01:33:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:52.986 01:33:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.986 01:33:44 -- common/autotest_common.sh@10 -- # set +x 00:25:52.986 01:33:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.986 01:33:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.986 01:33:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.986 01:33:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:52.986 01:33:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:53.917 01:33:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:53.917 01:33:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.917 01:33:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:53.917 01:33:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.917 01:33:45 -- common/autotest_common.sh@10 -- # set +x 00:25:53.917 01:33:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:53.917 01:33:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:53.917 01:33:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.176 01:33:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:54.177 01:33:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:55.112 01:33:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:55.112 01:33:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:25:55.112 01:33:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:55.112 01:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:55.112 01:33:46 -- common/autotest_common.sh@10 -- # set +x 00:25:55.112 01:33:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:55.112 01:33:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:55.112 01:33:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:55.112 01:33:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:55.112 01:33:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:56.045 01:33:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:56.045 01:33:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.045 01:33:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:56.045 01:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.045 01:33:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:56.045 01:33:47 -- common/autotest_common.sh@10 -- # set +x 00:25:56.045 01:33:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:56.045 01:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.045 01:33:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:56.045 01:33:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:57.419 01:33:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:57.419 01:33:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.419 01:33:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.419 01:33:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:57.419 01:33:48 -- common/autotest_common.sh@10 -- # set +x 00:25:57.419 01:33:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:57.419 01:33:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:57.419 01:33:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.419 01:33:48 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:57.419 01:33:48 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:57.419 [2024-07-27 01:33:48.905140] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:57.420 [2024-07-27 01:33:48.905199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.420 [2024-07-27 01:33:48.905220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.420 [2024-07-27 01:33:48.905236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.420 [2024-07-27 01:33:48.905249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.420 [2024-07-27 01:33:48.905262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.420 [2024-07-27 01:33:48.905274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.420 [2024-07-27 01:33:48.905287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:25:57.420 [2024-07-27 01:33:48.905300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.420 [2024-07-27 01:33:48.905314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.420 [2024-07-27 01:33:48.905327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.420 [2024-07-27 01:33:48.905355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2250810 is same with the state(5) to be set 00:25:57.420 [2024-07-27 01:33:48.915174] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2250810 (9): Bad file descriptor 00:25:57.420 [2024-07-27 01:33:48.925206] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:58.356 01:33:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:58.356 01:33:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.356 01:33:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:58.357 01:33:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.357 01:33:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:58.357 01:33:49 -- common/autotest_common.sh@10 -- # set +x 00:25:58.357 01:33:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:58.357 [2024-07-27 01:33:49.965098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:59.287 [2024-07-27 01:33:50.989152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:59.287 [2024-07-27 01:33:50.989226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2250810 with addr=10.0.0.2, port=4420 00:25:59.287 [2024-07-27 01:33:50.989254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2250810 is same with the state(5) to be set 00:25:59.287 [2024-07-27 01:33:50.989290] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.287 [2024-07-27 01:33:50.989309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.287 [2024-07-27 01:33:50.989323] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.287 [2024-07-27 01:33:50.989339] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:59.287 [2024-07-27 01:33:50.989836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2250810 (9): Bad file descriptor 00:25:59.287 [2024-07-27 01:33:50.989886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
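Note: the repeating get_bdev_list / sleep 1 blocks above and below are the test's wait_for_bdev polling loop. After 'ip addr del' and 'ip link set cvl_0_0 down', connect() retries fail with errno 110 until the controller-loss timers given to bdev_nvme_start_discovery (--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1) expire and nvme0n1 drops out of the bdev list. A rough reconstruction of that polling helper (not the verbatim host/discovery_remove_ifc.sh code; it assumes rpc_cmd resolves to scripts/rpc.py against /tmp/host.sock, as elsewhere in this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_bdev_list() {
        # one-line, sorted list of bdev names reported by the host-side app
        "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1    # e.g. "nvme0n1" after attach, "" once the path is gone
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }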
00:25:59.287 [2024-07-27 01:33:50.989955] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:59.287 [2024-07-27 01:33:50.990009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.287 [2024-07-27 01:33:50.990032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.287 [2024-07-27 01:33:50.990092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.287 [2024-07-27 01:33:50.990109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.287 [2024-07-27 01:33:50.990125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.287 [2024-07-27 01:33:50.990139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.287 [2024-07-27 01:33:50.990153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.287 [2024-07-27 01:33:50.990166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.287 [2024-07-27 01:33:50.990181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.287 [2024-07-27 01:33:50.990195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.287 [2024-07-27 01:33:50.990209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:25:59.287 [2024-07-27 01:33:50.990299] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2250c20 (9): Bad file descriptor 00:25:59.287 [2024-07-27 01:33:50.991335] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:59.287 [2024-07-27 01:33:50.991386] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:59.287 01:33:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.287 01:33:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:59.287 01:33:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.663 01:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.663 01:33:52 -- common/autotest_common.sh@10 -- # set +x 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.663 01:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.663 01:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.663 01:33:52 -- common/autotest_common.sh@10 -- # set +x 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.663 01:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:00.663 01:33:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.600 [2024-07-27 01:33:53.009713] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:01.600 [2024-07-27 01:33:53.009737] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:01.600 [2024-07-27 01:33:53.009762] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:01.600 [2024-07-27 01:33:53.097044] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:01.600 01:33:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.600 01:33:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.600 01:33:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.600 01:33:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.600 01:33:53 -- common/autotest_common.sh@10 -- # set +x 00:26:01.600 01:33:53 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:26:01.600 01:33:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.600 01:33:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.600 01:33:53 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:01.600 01:33:53 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.600 [2024-07-27 01:33:53.280416] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:01.600 [2024-07-27 01:33:53.280471] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:01.600 [2024-07-27 01:33:53.280510] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:01.600 [2024-07-27 01:33:53.280535] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:01.600 [2024-07-27 01:33:53.280552] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:01.600 [2024-07-27 01:33:53.328812] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x225f1d0 was disconnected and freed. delete nvme_qpair. 00:26:02.569 01:33:54 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.569 01:33:54 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.569 01:33:54 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.569 01:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:02.569 01:33:54 -- common/autotest_common.sh@10 -- # set +x 00:26:02.569 01:33:54 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.569 01:33:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.569 01:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:02.569 01:33:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:02.569 01:33:54 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:02.569 01:33:54 -- host/discovery_remove_ifc.sh@90 -- # killprocess 740235 00:26:02.569 01:33:54 -- common/autotest_common.sh@926 -- # '[' -z 740235 ']' 00:26:02.569 01:33:54 -- common/autotest_common.sh@930 -- # kill -0 740235 00:26:02.569 01:33:54 -- common/autotest_common.sh@931 -- # uname 00:26:02.569 01:33:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:02.569 01:33:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 740235 00:26:02.569 01:33:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:02.569 01:33:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:02.569 01:33:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 740235' 00:26:02.569 killing process with pid 740235 00:26:02.569 01:33:54 -- common/autotest_common.sh@945 -- # kill 740235 00:26:02.569 01:33:54 -- common/autotest_common.sh@950 -- # wait 740235 00:26:02.829 01:33:54 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:02.829 01:33:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:02.829 01:33:54 -- nvmf/common.sh@116 -- # sync 00:26:02.829 01:33:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:02.829 01:33:54 -- nvmf/common.sh@119 -- # set +e 00:26:02.829 01:33:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:02.829 01:33:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:02.829 rmmod nvme_tcp 00:26:02.829 rmmod nvme_fabrics 00:26:02.829 rmmod nvme_keyring 00:26:02.829 01:33:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:02.829 01:33:54 -- nvmf/common.sh@123 -- # set -e 00:26:02.829 01:33:54 -- 
nvmf/common.sh@124 -- # return 0 00:26:02.829 01:33:54 -- nvmf/common.sh@477 -- # '[' -n 740092 ']' 00:26:02.829 01:33:54 -- nvmf/common.sh@478 -- # killprocess 740092 00:26:02.829 01:33:54 -- common/autotest_common.sh@926 -- # '[' -z 740092 ']' 00:26:02.829 01:33:54 -- common/autotest_common.sh@930 -- # kill -0 740092 00:26:02.829 01:33:54 -- common/autotest_common.sh@931 -- # uname 00:26:02.829 01:33:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:02.829 01:33:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 740092 00:26:02.829 01:33:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:02.829 01:33:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:02.829 01:33:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 740092' 00:26:02.829 killing process with pid 740092 00:26:02.829 01:33:54 -- common/autotest_common.sh@945 -- # kill 740092 00:26:02.829 01:33:54 -- common/autotest_common.sh@950 -- # wait 740092 00:26:03.087 01:33:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:03.088 01:33:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:03.088 01:33:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:03.088 01:33:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:03.088 01:33:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:03.088 01:33:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.088 01:33:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.088 01:33:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.626 01:33:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:05.626 00:26:05.626 real 0m18.261s 00:26:05.626 user 0m25.310s 00:26:05.626 sys 0m2.969s 00:26:05.626 01:33:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.626 01:33:56 -- common/autotest_common.sh@10 -- # set +x 00:26:05.626 ************************************ 00:26:05.626 END TEST nvmf_discovery_remove_ifc 00:26:05.626 ************************************ 00:26:05.626 01:33:56 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:26:05.626 01:33:56 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:05.626 01:33:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:05.626 01:33:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:05.626 01:33:56 -- common/autotest_common.sh@10 -- # set +x 00:26:05.626 ************************************ 00:26:05.626 START TEST nvmf_digest 00:26:05.626 ************************************ 00:26:05.626 01:33:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:05.626 * Looking for test storage... 
00:26:05.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:05.626 01:33:56 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.626 01:33:56 -- nvmf/common.sh@7 -- # uname -s 00:26:05.626 01:33:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.626 01:33:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.626 01:33:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.626 01:33:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.626 01:33:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.626 01:33:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.626 01:33:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.626 01:33:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.626 01:33:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.626 01:33:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.626 01:33:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:05.626 01:33:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:05.626 01:33:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.626 01:33:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.626 01:33:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.626 01:33:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.626 01:33:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.626 01:33:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.626 01:33:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.626 01:33:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.626 01:33:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.626 01:33:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.626 01:33:56 -- paths/export.sh@5 -- # export PATH 00:26:05.626 01:33:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.626 01:33:56 -- nvmf/common.sh@46 -- # : 0 00:26:05.626 01:33:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:05.626 01:33:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:05.626 01:33:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:05.626 01:33:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.626 01:33:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.626 01:33:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:05.626 01:33:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:05.626 01:33:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:05.626 01:33:56 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:05.626 01:33:56 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:05.626 01:33:56 -- host/digest.sh@16 -- # runtime=2 00:26:05.626 01:33:56 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:26:05.626 01:33:56 -- host/digest.sh@132 -- # nvmftestinit 00:26:05.626 01:33:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:05.626 01:33:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.626 01:33:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:05.626 01:33:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:05.626 01:33:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:05.626 01:33:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.626 01:33:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:05.626 01:33:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.626 01:33:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:05.626 01:33:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:05.626 01:33:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:05.626 01:33:56 -- common/autotest_common.sh@10 -- # set +x 00:26:07.532 01:33:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:07.532 01:33:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:07.532 01:33:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:07.532 01:33:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:07.532 01:33:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:07.532 01:33:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:07.532 01:33:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:07.532 01:33:59 -- 
nvmf/common.sh@294 -- # net_devs=() 00:26:07.532 01:33:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:07.532 01:33:59 -- nvmf/common.sh@295 -- # e810=() 00:26:07.532 01:33:59 -- nvmf/common.sh@295 -- # local -ga e810 00:26:07.532 01:33:59 -- nvmf/common.sh@296 -- # x722=() 00:26:07.532 01:33:59 -- nvmf/common.sh@296 -- # local -ga x722 00:26:07.532 01:33:59 -- nvmf/common.sh@297 -- # mlx=() 00:26:07.532 01:33:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:07.532 01:33:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.532 01:33:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.532 01:33:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.532 01:33:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.532 01:33:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.532 01:33:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.532 01:33:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.532 01:33:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.532 01:33:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.532 01:33:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.532 01:33:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.532 01:33:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:07.532 01:33:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:07.532 01:33:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:07.532 01:33:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:07.532 01:33:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:07.532 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:07.532 01:33:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:07.532 01:33:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:07.532 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:07.532 01:33:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:07.532 01:33:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:07.532 01:33:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:07.532 01:33:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.532 01:33:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:07.533 01:33:59 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.533 01:33:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:07.533 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:07.533 01:33:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.533 01:33:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:07.533 01:33:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.533 01:33:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:07.533 01:33:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.533 01:33:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:07.533 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:07.533 01:33:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.533 01:33:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:07.533 01:33:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:07.533 01:33:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:07.533 01:33:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:07.533 01:33:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:07.533 01:33:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.533 01:33:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.533 01:33:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.533 01:33:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:07.533 01:33:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.533 01:33:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.533 01:33:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:07.533 01:33:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.533 01:33:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.533 01:33:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:07.533 01:33:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:07.533 01:33:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.533 01:33:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.533 01:33:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.533 01:33:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.533 01:33:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:07.533 01:33:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.533 01:33:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.533 01:33:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.533 01:33:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:07.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:26:07.533 00:26:07.533 --- 10.0.0.2 ping statistics --- 00:26:07.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.533 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:26:07.533 01:33:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:07.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:26:07.533 00:26:07.533 --- 10.0.0.1 ping statistics --- 00:26:07.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.533 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:26:07.533 01:33:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.533 01:33:59 -- nvmf/common.sh@410 -- # return 0 00:26:07.533 01:33:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:07.533 01:33:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.533 01:33:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:07.533 01:33:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:07.533 01:33:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.533 01:33:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:07.533 01:33:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:07.533 01:33:59 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:07.533 01:33:59 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:26:07.533 01:33:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:07.533 01:33:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:07.533 01:33:59 -- common/autotest_common.sh@10 -- # set +x 00:26:07.533 ************************************ 00:26:07.533 START TEST nvmf_digest_clean 00:26:07.533 ************************************ 00:26:07.533 01:33:59 -- common/autotest_common.sh@1104 -- # run_digest 00:26:07.533 01:33:59 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:26:07.533 01:33:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:07.533 01:33:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:07.533 01:33:59 -- common/autotest_common.sh@10 -- # set +x 00:26:07.533 01:33:59 -- nvmf/common.sh@469 -- # nvmfpid=743768 00:26:07.533 01:33:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:07.533 01:33:59 -- nvmf/common.sh@470 -- # waitforlisten 743768 00:26:07.533 01:33:59 -- common/autotest_common.sh@819 -- # '[' -z 743768 ']' 00:26:07.533 01:33:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.533 01:33:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:07.533 01:33:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.533 01:33:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:07.533 01:33:59 -- common/autotest_common.sh@10 -- # set +x 00:26:07.792 [2024-07-27 01:33:59.302856] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:07.792 [2024-07-27 01:33:59.302940] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.792 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.792 [2024-07-27 01:33:59.364042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.792 [2024-07-27 01:33:59.471987] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:07.792 [2024-07-27 01:33:59.472168] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.792 [2024-07-27 01:33:59.472190] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.792 [2024-07-27 01:33:59.472203] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:07.792 [2024-07-27 01:33:59.472233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.792 01:33:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:07.792 01:33:59 -- common/autotest_common.sh@852 -- # return 0 00:26:07.792 01:33:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:07.792 01:33:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:07.792 01:33:59 -- common/autotest_common.sh@10 -- # set +x 00:26:07.792 01:33:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.792 01:33:59 -- host/digest.sh@120 -- # common_target_config 00:26:07.792 01:33:59 -- host/digest.sh@43 -- # rpc_cmd 00:26:07.792 01:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.792 01:33:59 -- common/autotest_common.sh@10 -- # set +x 00:26:08.050 null0 00:26:08.050 [2024-07-27 01:33:59.640627] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.050 [2024-07-27 01:33:59.664875] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.050 01:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:08.050 01:33:59 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:26:08.050 01:33:59 -- host/digest.sh@77 -- # local rw bs qd 00:26:08.050 01:33:59 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:08.050 01:33:59 -- host/digest.sh@80 -- # rw=randread 00:26:08.050 01:33:59 -- host/digest.sh@80 -- # bs=4096 00:26:08.050 01:33:59 -- host/digest.sh@80 -- # qd=128 00:26:08.050 01:33:59 -- host/digest.sh@82 -- # bperfpid=743801 00:26:08.050 01:33:59 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:08.050 01:33:59 -- host/digest.sh@83 -- # waitforlisten 743801 /var/tmp/bperf.sock 00:26:08.050 01:33:59 -- common/autotest_common.sh@819 -- # '[' -z 743801 ']' 00:26:08.050 01:33:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:08.050 01:33:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:08.050 01:33:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:08.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
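Note: run_bperf drives a separate bdevperf process over its own RPC socket rather than going through the target application. The sequence that follows in this log, condensed into a sketch (the paths, the /var/tmp/bperf.sock socket and the nqn.2016-06.io.spdk:cnode1 subsystem are taken from this run):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # start bdevperf idle: -z keeps it alive, --wait-for-rpc defers framework init
    "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # once the socket answers, finish init and attach the NVMe/TCP controller;
    # --ddgst enables the data digest so the crc32c accel path gets exercised
    "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
    "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # kick off the timed run against the resulting nvme0n1 bdev
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests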
00:26:08.050 01:33:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:08.050 01:33:59 -- common/autotest_common.sh@10 -- # set +x 00:26:08.050 [2024-07-27 01:33:59.707791] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:08.050 [2024-07-27 01:33:59.707854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid743801 ] 00:26:08.050 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.050 [2024-07-27 01:33:59.767822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.309 [2024-07-27 01:33:59.881559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.309 01:33:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:08.309 01:33:59 -- common/autotest_common.sh@852 -- # return 0 00:26:08.309 01:33:59 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:08.309 01:33:59 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:08.309 01:33:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:08.567 01:34:00 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:08.567 01:34:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:09.137 nvme0n1 00:26:09.137 01:34:00 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:09.137 01:34:00 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:09.137 Running I/O for 2 seconds... 
00:26:11.038 00:26:11.038 Latency(us) 00:26:11.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.038 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:11.038 nvme0n1 : 2.00 17034.34 66.54 0.00 0.00 7504.24 3034.07 13204.29 00:26:11.038 =================================================================================================================== 00:26:11.038 Total : 17034.34 66.54 0.00 0.00 7504.24 3034.07 13204.29 00:26:11.038 0 00:26:11.038 01:34:02 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:11.038 01:34:02 -- host/digest.sh@92 -- # get_accel_stats 00:26:11.038 01:34:02 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:11.038 01:34:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:11.038 01:34:02 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:11.038 | select(.opcode=="crc32c") 00:26:11.038 | "\(.module_name) \(.executed)"' 00:26:11.296 01:34:03 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:11.296 01:34:03 -- host/digest.sh@93 -- # exp_module=software 00:26:11.296 01:34:03 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:11.296 01:34:03 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:11.296 01:34:03 -- host/digest.sh@97 -- # killprocess 743801 00:26:11.296 01:34:03 -- common/autotest_common.sh@926 -- # '[' -z 743801 ']' 00:26:11.296 01:34:03 -- common/autotest_common.sh@930 -- # kill -0 743801 00:26:11.296 01:34:03 -- common/autotest_common.sh@931 -- # uname 00:26:11.296 01:34:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:11.296 01:34:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 743801 00:26:11.296 01:34:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:11.296 01:34:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:11.296 01:34:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 743801' 00:26:11.296 killing process with pid 743801 00:26:11.296 01:34:03 -- common/autotest_common.sh@945 -- # kill 743801 00:26:11.296 Received shutdown signal, test time was about 2.000000 seconds 00:26:11.296 00:26:11.296 Latency(us) 00:26:11.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.296 =================================================================================================================== 00:26:11.296 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:11.296 01:34:03 -- common/autotest_common.sh@950 -- # wait 743801 00:26:11.862 01:34:03 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:26:11.862 01:34:03 -- host/digest.sh@77 -- # local rw bs qd 00:26:11.862 01:34:03 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:11.862 01:34:03 -- host/digest.sh@80 -- # rw=randread 00:26:11.862 01:34:03 -- host/digest.sh@80 -- # bs=131072 00:26:11.862 01:34:03 -- host/digest.sh@80 -- # qd=16 00:26:11.862 01:34:03 -- host/digest.sh@82 -- # bperfpid=744225 00:26:11.862 01:34:03 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:11.862 01:34:03 -- host/digest.sh@83 -- # waitforlisten 744225 /var/tmp/bperf.sock 00:26:11.862 01:34:03 -- common/autotest_common.sh@819 -- # '[' -z 744225 ']' 00:26:11.862 01:34:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:26:11.862 01:34:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:11.862 01:34:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:11.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:11.862 01:34:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:11.862 01:34:03 -- common/autotest_common.sh@10 -- # set +x 00:26:11.862 [2024-07-27 01:34:03.354204] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:11.862 [2024-07-27 01:34:03.354283] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid744225 ] 00:26:11.862 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:11.862 Zero copy mechanism will not be used. 00:26:11.862 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.862 [2024-07-27 01:34:03.412501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.862 [2024-07-27 01:34:03.519951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.862 01:34:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:11.862 01:34:03 -- common/autotest_common.sh@852 -- # return 0 00:26:11.862 01:34:03 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:11.862 01:34:03 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:11.862 01:34:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:12.429 01:34:03 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.429 01:34:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.686 nvme0n1 00:26:12.686 01:34:04 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:12.686 01:34:04 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:12.945 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:12.945 Zero copy mechanism will not be used. 00:26:12.945 Running I/O for 2 seconds... 
00:26:14.849 00:26:14.849 Latency(us) 00:26:14.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.849 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:14.849 nvme0n1 : 2.01 2568.29 321.04 0.00 0.00 6226.68 5679.79 9126.49 00:26:14.849 =================================================================================================================== 00:26:14.849 Total : 2568.29 321.04 0.00 0.00 6226.68 5679.79 9126.49 00:26:14.849 0 00:26:14.849 01:34:06 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:14.850 01:34:06 -- host/digest.sh@92 -- # get_accel_stats 00:26:14.850 01:34:06 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:14.850 01:34:06 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:14.850 | select(.opcode=="crc32c") 00:26:14.850 | "\(.module_name) \(.executed)"' 00:26:14.850 01:34:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:15.108 01:34:06 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:15.108 01:34:06 -- host/digest.sh@93 -- # exp_module=software 00:26:15.108 01:34:06 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:15.108 01:34:06 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:15.108 01:34:06 -- host/digest.sh@97 -- # killprocess 744225 00:26:15.108 01:34:06 -- common/autotest_common.sh@926 -- # '[' -z 744225 ']' 00:26:15.108 01:34:06 -- common/autotest_common.sh@930 -- # kill -0 744225 00:26:15.108 01:34:06 -- common/autotest_common.sh@931 -- # uname 00:26:15.108 01:34:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:15.108 01:34:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 744225 00:26:15.108 01:34:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:15.108 01:34:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:15.108 01:34:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 744225' 00:26:15.108 killing process with pid 744225 00:26:15.108 01:34:06 -- common/autotest_common.sh@945 -- # kill 744225 00:26:15.108 Received shutdown signal, test time was about 2.000000 seconds 00:26:15.108 00:26:15.108 Latency(us) 00:26:15.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.108 =================================================================================================================== 00:26:15.108 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:15.108 01:34:06 -- common/autotest_common.sh@950 -- # wait 744225 00:26:15.366 01:34:07 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:26:15.366 01:34:07 -- host/digest.sh@77 -- # local rw bs qd 00:26:15.366 01:34:07 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:15.366 01:34:07 -- host/digest.sh@80 -- # rw=randwrite 00:26:15.366 01:34:07 -- host/digest.sh@80 -- # bs=4096 00:26:15.366 01:34:07 -- host/digest.sh@80 -- # qd=128 00:26:15.366 01:34:07 -- host/digest.sh@82 -- # bperfpid=744761 00:26:15.366 01:34:07 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:15.366 01:34:07 -- host/digest.sh@83 -- # waitforlisten 744761 /var/tmp/bperf.sock 00:26:15.366 01:34:07 -- common/autotest_common.sh@819 -- # '[' -z 744761 ']' 00:26:15.366 01:34:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
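Editor's note: each run is torn down with the killprocess helper traced above; the trace shows the PID guard, the liveness check, the comm-name guard against killing a sudo wrapper, and the final kill/wait. A rough reconstruction under those traced steps (signal choice and return handling are assumptions):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                        # '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 1                       # process must still be alive
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" != sudo ] || return 1          # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"                                      # SIGTERM by default; signal is an assumption
        wait "$pid" || true                              # reap the backgrounded bdevperf reactor
    }

    killprocess "$bperfpid"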
00:26:15.366 01:34:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:15.366 01:34:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:15.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:15.366 01:34:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:15.366 01:34:07 -- common/autotest_common.sh@10 -- # set +x 00:26:15.366 [2024-07-27 01:34:07.086532] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:15.366 [2024-07-27 01:34:07.086619] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid744761 ] 00:26:15.366 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.624 [2024-07-27 01:34:07.145555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.624 [2024-07-27 01:34:07.251418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.624 01:34:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:15.624 01:34:07 -- common/autotest_common.sh@852 -- # return 0 00:26:15.624 01:34:07 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:15.624 01:34:07 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:15.624 01:34:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:15.882 01:34:07 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.882 01:34:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:16.450 nvme0n1 00:26:16.450 01:34:08 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:16.450 01:34:08 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:16.450 Running I/O for 2 seconds... 
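Editor's note: the four clean-digest runs differ only in the run_bperf arguments (rw, block size, queue depth), which the traces show being passed straight through to bdevperf. A condensed sketch of that parameterization, with the flags copied from the traces (backgrounding of bdevperf and the helper shape are assumptions; waitforlisten is sketched further below):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    run_bperf() {
        local rw=$1 bs=$2 qd=$3
        # Start a 2-second bdevperf run against the bperf RPC socket, then configure it via RPC.
        "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
            -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
        bperfpid=$!
        waitforlisten "$bperfpid" /var/tmp/bperf.sock
    }

    run_bperf randread 4096 128      # the four combinations exercised in this log
    run_bperf randread 131072 16
    run_bperf randwrite 4096 128
    run_bperf randwrite 131072 16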
00:26:18.989 00:26:18.989 Latency(us) 00:26:18.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.989 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:18.989 nvme0n1 : 2.00 21185.53 82.76 0.00 0.00 6033.70 3106.89 12039.21 00:26:18.989 =================================================================================================================== 00:26:18.989 Total : 21185.53 82.76 0.00 0.00 6033.70 3106.89 12039.21 00:26:18.989 0 00:26:18.989 01:34:10 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:18.989 01:34:10 -- host/digest.sh@92 -- # get_accel_stats 00:26:18.989 01:34:10 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:18.989 01:34:10 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:18.989 | select(.opcode=="crc32c") 00:26:18.989 | "\(.module_name) \(.executed)"' 00:26:18.989 01:34:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:18.989 01:34:10 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:18.989 01:34:10 -- host/digest.sh@93 -- # exp_module=software 00:26:18.989 01:34:10 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:18.989 01:34:10 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:18.989 01:34:10 -- host/digest.sh@97 -- # killprocess 744761 00:26:18.989 01:34:10 -- common/autotest_common.sh@926 -- # '[' -z 744761 ']' 00:26:18.989 01:34:10 -- common/autotest_common.sh@930 -- # kill -0 744761 00:26:18.989 01:34:10 -- common/autotest_common.sh@931 -- # uname 00:26:18.989 01:34:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:18.989 01:34:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 744761 00:26:18.989 01:34:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:18.989 01:34:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:18.989 01:34:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 744761' 00:26:18.989 killing process with pid 744761 00:26:18.989 01:34:10 -- common/autotest_common.sh@945 -- # kill 744761 00:26:18.989 Received shutdown signal, test time was about 2.000000 seconds 00:26:18.989 00:26:18.989 Latency(us) 00:26:18.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.989 =================================================================================================================== 00:26:18.989 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:18.989 01:34:10 -- common/autotest_common.sh@950 -- # wait 744761 00:26:18.989 01:34:10 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:26:18.989 01:34:10 -- host/digest.sh@77 -- # local rw bs qd 00:26:18.989 01:34:10 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:18.989 01:34:10 -- host/digest.sh@80 -- # rw=randwrite 00:26:18.989 01:34:10 -- host/digest.sh@80 -- # bs=131072 00:26:18.989 01:34:10 -- host/digest.sh@80 -- # qd=16 00:26:18.989 01:34:10 -- host/digest.sh@82 -- # bperfpid=745183 00:26:18.989 01:34:10 -- host/digest.sh@83 -- # waitforlisten 745183 /var/tmp/bperf.sock 00:26:18.989 01:34:10 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:18.989 01:34:10 -- common/autotest_common.sh@819 -- # '[' -z 745183 ']' 00:26:18.989 01:34:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:26:18.989 01:34:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:18.989 01:34:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.989 01:34:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:18.989 01:34:10 -- common/autotest_common.sh@10 -- # set +x 00:26:19.247 [2024-07-27 01:34:10.782670] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:19.247 [2024-07-27 01:34:10.782768] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid745183 ] 00:26:19.247 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:19.247 Zero copy mechanism will not be used. 00:26:19.247 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.247 [2024-07-27 01:34:10.847680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.247 [2024-07-27 01:34:10.960413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.219 01:34:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:20.219 01:34:11 -- common/autotest_common.sh@852 -- # return 0 00:26:20.219 01:34:11 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:20.219 01:34:11 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:20.219 01:34:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:20.477 01:34:12 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.477 01:34:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.734 nvme0n1 00:26:20.734 01:34:12 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:20.734 01:34:12 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:20.734 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:20.734 Zero copy mechanism will not be used. 00:26:20.734 Running I/O for 2 seconds... 
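Editor's note: between launching bdevperf with --wait-for-rpc and issuing any RPC, the script blocks in waitforlisten (autotest_common.sh@819-852 in the traces). Only the argument guard, the rpc_addr/max_retries locals and the waiting message are visible above; the polling loop below is a hypothetical sketch of what such a wait could look like:

    waitforlisten() {
        local pid=$1
        [ -n "$pid" ] || return 1
        local rpc_addr=${2:-/var/tmp/spdk.sock}          # bperf runs pass /var/tmp/bperf.sock
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        local i
        for (( i = max_retries; i > 0; i-- )); do
            kill -0 "$pid" 2>/dev/null || return 1       # assumed: give up if the process died
            [ -S "$rpc_addr" ] && break                  # assumed readiness check: socket exists
            sleep 0.1
        done
        (( i == 0 )) && return 1                         # mirrors the traced "(( i == 0 ))" / "return 0" pair
        return 0
    }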
00:26:23.265 00:26:23.265 Latency(us) 00:26:23.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.265 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:23.265 nvme0n1 : 2.01 1541.11 192.64 0.00 0.00 10351.66 3422.44 15825.73 00:26:23.265 =================================================================================================================== 00:26:23.265 Total : 1541.11 192.64 0.00 0.00 10351.66 3422.44 15825.73 00:26:23.265 0 00:26:23.265 01:34:14 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:23.265 01:34:14 -- host/digest.sh@92 -- # get_accel_stats 00:26:23.265 01:34:14 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:23.265 01:34:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:23.265 01:34:14 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:23.265 | select(.opcode=="crc32c") 00:26:23.265 | "\(.module_name) \(.executed)"' 00:26:23.265 01:34:14 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:23.265 01:34:14 -- host/digest.sh@93 -- # exp_module=software 00:26:23.265 01:34:14 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:23.265 01:34:14 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:23.265 01:34:14 -- host/digest.sh@97 -- # killprocess 745183 00:26:23.265 01:34:14 -- common/autotest_common.sh@926 -- # '[' -z 745183 ']' 00:26:23.265 01:34:14 -- common/autotest_common.sh@930 -- # kill -0 745183 00:26:23.265 01:34:14 -- common/autotest_common.sh@931 -- # uname 00:26:23.265 01:34:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:23.265 01:34:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 745183 00:26:23.265 01:34:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:23.265 01:34:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:23.265 01:34:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 745183' 00:26:23.265 killing process with pid 745183 00:26:23.265 01:34:14 -- common/autotest_common.sh@945 -- # kill 745183 00:26:23.265 Received shutdown signal, test time was about 2.000000 seconds 00:26:23.265 00:26:23.265 Latency(us) 00:26:23.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.266 =================================================================================================================== 00:26:23.266 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.266 01:34:14 -- common/autotest_common.sh@950 -- # wait 745183 00:26:23.266 01:34:14 -- host/digest.sh@126 -- # killprocess 743768 00:26:23.266 01:34:14 -- common/autotest_common.sh@926 -- # '[' -z 743768 ']' 00:26:23.266 01:34:14 -- common/autotest_common.sh@930 -- # kill -0 743768 00:26:23.266 01:34:14 -- common/autotest_common.sh@931 -- # uname 00:26:23.266 01:34:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:23.266 01:34:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 743768 00:26:23.266 01:34:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:23.266 01:34:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:23.266 01:34:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 743768' 00:26:23.266 killing process with pid 743768 00:26:23.266 01:34:15 -- common/autotest_common.sh@945 -- # kill 743768 00:26:23.266 01:34:15 -- common/autotest_common.sh@950 -- # wait 743768 00:26:23.834 
00:26:23.834 real 0m16.038s 00:26:23.834 user 0m32.127s 00:26:23.834 sys 0m3.998s 00:26:23.834 01:34:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.834 01:34:15 -- common/autotest_common.sh@10 -- # set +x 00:26:23.834 ************************************ 00:26:23.834 END TEST nvmf_digest_clean 00:26:23.834 ************************************ 00:26:23.834 01:34:15 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:26:23.834 01:34:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:23.834 01:34:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:23.834 01:34:15 -- common/autotest_common.sh@10 -- # set +x 00:26:23.834 ************************************ 00:26:23.834 START TEST nvmf_digest_error 00:26:23.834 ************************************ 00:26:23.834 01:34:15 -- common/autotest_common.sh@1104 -- # run_digest_error 00:26:23.835 01:34:15 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:26:23.835 01:34:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:23.835 01:34:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:23.835 01:34:15 -- common/autotest_common.sh@10 -- # set +x 00:26:23.835 01:34:15 -- nvmf/common.sh@469 -- # nvmfpid=745759 00:26:23.835 01:34:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:23.835 01:34:15 -- nvmf/common.sh@470 -- # waitforlisten 745759 00:26:23.835 01:34:15 -- common/autotest_common.sh@819 -- # '[' -z 745759 ']' 00:26:23.835 01:34:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.835 01:34:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:23.835 01:34:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.835 01:34:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:23.835 01:34:15 -- common/autotest_common.sh@10 -- # set +x 00:26:23.835 [2024-07-27 01:34:15.373486] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:23.835 [2024-07-27 01:34:15.373571] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.835 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.835 [2024-07-27 01:34:15.440076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.835 [2024-07-27 01:34:15.558670] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:23.835 [2024-07-27 01:34:15.558841] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.835 [2024-07-27 01:34:15.558862] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.835 [2024-07-27 01:34:15.558876] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
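Editor's note: the error-path test brings up its own nvmf target inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and, once /var/tmp/spdk.sock is up, routes the target's crc32c work through the accel error module. A sketch assembled from the traced commands (backgrounding and the thin rpc_cmd wrapper are assumptions):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc_cmd() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }      # defaults to /var/tmp/spdk.sock

    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                             # waits on the default /var/tmp/spdk.sock
    rpc_cmd accel_assign_opc -o crc32c -m error          # "Operation crc32c will be assigned to module error"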
00:26:23.835 [2024-07-27 01:34:15.558910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.094 01:34:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:24.094 01:34:15 -- common/autotest_common.sh@852 -- # return 0 00:26:24.094 01:34:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:24.094 01:34:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:24.094 01:34:15 -- common/autotest_common.sh@10 -- # set +x 00:26:24.094 01:34:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.094 01:34:15 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:24.094 01:34:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:24.094 01:34:15 -- common/autotest_common.sh@10 -- # set +x 00:26:24.094 [2024-07-27 01:34:15.635569] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:24.094 01:34:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:24.094 01:34:15 -- host/digest.sh@104 -- # common_target_config 00:26:24.094 01:34:15 -- host/digest.sh@43 -- # rpc_cmd 00:26:24.094 01:34:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:24.094 01:34:15 -- common/autotest_common.sh@10 -- # set +x 00:26:24.094 null0 00:26:24.094 [2024-07-27 01:34:15.754085] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.094 [2024-07-27 01:34:15.778350] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.094 01:34:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:24.094 01:34:15 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:26:24.094 01:34:15 -- host/digest.sh@54 -- # local rw bs qd 00:26:24.094 01:34:15 -- host/digest.sh@56 -- # rw=randread 00:26:24.094 01:34:15 -- host/digest.sh@56 -- # bs=4096 00:26:24.094 01:34:15 -- host/digest.sh@56 -- # qd=128 00:26:24.094 01:34:15 -- host/digest.sh@58 -- # bperfpid=745789 00:26:24.094 01:34:15 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:24.094 01:34:15 -- host/digest.sh@60 -- # waitforlisten 745789 /var/tmp/bperf.sock 00:26:24.094 01:34:15 -- common/autotest_common.sh@819 -- # '[' -z 745789 ']' 00:26:24.094 01:34:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:24.094 01:34:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:24.094 01:34:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:24.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:24.094 01:34:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:24.094 01:34:15 -- common/autotest_common.sh@10 -- # set +x 00:26:24.094 [2024-07-27 01:34:15.820419] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:24.094 [2024-07-27 01:34:15.820481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid745789 ] 00:26:24.094 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.353 [2024-07-27 01:34:15.881827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.353 [2024-07-27 01:34:15.997283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.286 01:34:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:25.286 01:34:16 -- common/autotest_common.sh@852 -- # return 0 00:26:25.286 01:34:16 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:25.286 01:34:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:25.286 01:34:17 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:25.286 01:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.286 01:34:17 -- common/autotest_common.sh@10 -- # set +x 00:26:25.546 01:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.546 01:34:17 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.546 01:34:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.804 nvme0n1 00:26:25.804 01:34:17 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:25.804 01:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.804 01:34:17 -- common/autotest_common.sh@10 -- # set +x 00:26:25.804 01:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.804 01:34:17 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:25.804 01:34:17 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:26.063 Running I/O for 2 seconds... 
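Editor's note: before perform_tests, the error run first disables crc32c error injection so the controller can attach with data digest enabled, then re-arms it in corrupt mode so digest verification fails during the run; bdev retries are turned off so every failure surfaces. A condensed replay of the traced sequence (rpc_cmd is the target-side RPC helper and bperf_rpc the bdevperf-side one, following the traces; both are sketched in earlier notes):

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable        # let the attach succeed cleanly
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 # arm corrupt injection (flags as traced)
    bperf_py perform_tests                                       # produces the data digest errors below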
00:26:26.063 [2024-07-27 01:34:17.621280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.063 [2024-07-27 01:34:17.621328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.063 [2024-07-27 01:34:17.621372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.063 [2024-07-27 01:34:17.643248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.063 [2024-07-27 01:34:17.643279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.063 [2024-07-27 01:34:17.643296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.063 [2024-07-27 01:34:17.665002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.063 [2024-07-27 01:34:17.665040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.063 [2024-07-27 01:34:17.665067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.063 [2024-07-27 01:34:17.686346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.063 [2024-07-27 01:34:17.686396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.063 [2024-07-27 01:34:17.686416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.064 [2024-07-27 01:34:17.708226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.064 [2024-07-27 01:34:17.708256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.064 [2024-07-27 01:34:17.708272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.064 [2024-07-27 01:34:17.730066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.064 [2024-07-27 01:34:17.730115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.064 [2024-07-27 01:34:17.730131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.064 [2024-07-27 01:34:17.751999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.064 [2024-07-27 01:34:17.752035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.064 [2024-07-27 01:34:17.752055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.064 [2024-07-27 01:34:17.773958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.064 [2024-07-27 01:34:17.773995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.064 [2024-07-27 01:34:17.774015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.064 [2024-07-27 01:34:17.795363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.064 [2024-07-27 01:34:17.795413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.064 [2024-07-27 01:34:17.795433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.064 [2024-07-27 01:34:17.817071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.064 [2024-07-27 01:34:17.817106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.064 [2024-07-27 01:34:17.817140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.324 [2024-07-27 01:34:17.839065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.324 [2024-07-27 01:34:17.839114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.324 [2024-07-27 01:34:17.839130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.324 [2024-07-27 01:34:17.860919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.324 [2024-07-27 01:34:17.860955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.324 [2024-07-27 01:34:17.860975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.324 [2024-07-27 01:34:17.882393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.324 [2024-07-27 01:34:17.882444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.324 [2024-07-27 01:34:17.882465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.324 [2024-07-27 01:34:17.904014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.324 [2024-07-27 01:34:17.904051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.324 [2024-07-27 01:34:17.904080] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.324 [2024-07-27 01:34:17.926191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.324 [2024-07-27 01:34:17.926224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.324 [2024-07-27 01:34:17.926242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.324 [2024-07-27 01:34:17.947953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.324 [2024-07-27 01:34:17.947989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.324 [2024-07-27 01:34:17.948009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.324 [2024-07-27 01:34:17.969279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.324 [2024-07-27 01:34:17.969310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.324 [2024-07-27 01:34:17.969327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.324 [2024-07-27 01:34:17.991220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.324 [2024-07-27 01:34:17.991254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.324 [2024-07-27 01:34:17.991277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.324 [2024-07-27 01:34:18.012951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.324 [2024-07-27 01:34:18.012987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.324 [2024-07-27 01:34:18.013007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.324 [2024-07-27 01:34:18.033490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.324 [2024-07-27 01:34:18.033520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.324 [2024-07-27 01:34:18.033537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.324 [2024-07-27 01:34:18.053819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.324 [2024-07-27 01:34:18.053850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.324 [2024-07-27 
01:34:18.053866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.324 [2024-07-27 01:34:18.073671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.324 [2024-07-27 01:34:18.073702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.324 [2024-07-27 01:34:18.073718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.584 [2024-07-27 01:34:18.093819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.584 [2024-07-27 01:34:18.093850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.584 [2024-07-27 01:34:18.093867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.584 [2024-07-27 01:34:18.114110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.584 [2024-07-27 01:34:18.114142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.584 [2024-07-27 01:34:18.114159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.584 [2024-07-27 01:34:18.134271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.584 [2024-07-27 01:34:18.134302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.584 [2024-07-27 01:34:18.134319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.584 [2024-07-27 01:34:18.154710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.584 [2024-07-27 01:34:18.154742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.584 [2024-07-27 01:34:18.154759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.584 [2024-07-27 01:34:18.174348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.584 [2024-07-27 01:34:18.174402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.584 [2024-07-27 01:34:18.174419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.584 [2024-07-27 01:34:18.194455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.584 [2024-07-27 01:34:18.194485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15035 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:26.584 [2024-07-27 01:34:18.194501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.584 [2024-07-27 01:34:18.214598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.584 [2024-07-27 01:34:18.214629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.584 [2024-07-27 01:34:18.214646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.584 [2024-07-27 01:34:18.234315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.584 [2024-07-27 01:34:18.234346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.584 [2024-07-27 01:34:18.234363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.584 [2024-07-27 01:34:18.254677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.584 [2024-07-27 01:34:18.254709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.584 [2024-07-27 01:34:18.254725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.584 [2024-07-27 01:34:18.274711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.584 [2024-07-27 01:34:18.274741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.584 [2024-07-27 01:34:18.274758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.584 [2024-07-27 01:34:18.295003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.584 [2024-07-27 01:34:18.295034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.584 [2024-07-27 01:34:18.295072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.584 [2024-07-27 01:34:18.316240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.584 [2024-07-27 01:34:18.316271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.584 [2024-07-27 01:34:18.316287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.584 [2024-07-27 01:34:18.336635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.584 [2024-07-27 01:34:18.336665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:69 nsid:1 lba:14070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.584 [2024-07-27 01:34:18.336682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.845 [2024-07-27 01:34:18.356744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.845 [2024-07-27 01:34:18.356775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.845 [2024-07-27 01:34:18.356791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.845 [2024-07-27 01:34:18.376989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.845 [2024-07-27 01:34:18.377020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.845 [2024-07-27 01:34:18.377036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.845 [2024-07-27 01:34:18.397000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.845 [2024-07-27 01:34:18.397030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.845 [2024-07-27 01:34:18.397066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.845 [2024-07-27 01:34:18.417030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.845 [2024-07-27 01:34:18.417083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.845 [2024-07-27 01:34:18.417101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.845 [2024-07-27 01:34:18.436685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.845 [2024-07-27 01:34:18.436715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.845 [2024-07-27 01:34:18.436731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.845 [2024-07-27 01:34:18.456870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.845 [2024-07-27 01:34:18.456900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.845 [2024-07-27 01:34:18.456917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.845 [2024-07-27 01:34:18.476993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.845 [2024-07-27 01:34:18.477023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.845 [2024-07-27 01:34:18.477054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.845 [2024-07-27 01:34:18.496838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.845 [2024-07-27 01:34:18.496870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.845 [2024-07-27 01:34:18.496887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.845 [2024-07-27 01:34:18.517169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.845 [2024-07-27 01:34:18.517211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.845 [2024-07-27 01:34:18.517237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.845 [2024-07-27 01:34:18.537141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.845 [2024-07-27 01:34:18.537172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.845 [2024-07-27 01:34:18.537189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.845 [2024-07-27 01:34:18.557622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.845 [2024-07-27 01:34:18.557652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.845 [2024-07-27 01:34:18.557669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.845 [2024-07-27 01:34:18.577605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.845 [2024-07-27 01:34:18.577635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.845 [2024-07-27 01:34:18.577651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.845 [2024-07-27 01:34:18.597833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:26.845 [2024-07-27 01:34:18.597881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.845 [2024-07-27 01:34:18.597899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.106 [2024-07-27 01:34:18.617949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 
00:26:27.106 [2024-07-27 01:34:18.617980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.106 [2024-07-27 01:34:18.617996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.106 [2024-07-27 01:34:18.638107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.106 [2024-07-27 01:34:18.638137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.106 [2024-07-27 01:34:18.638154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.106 [2024-07-27 01:34:18.658180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.106 [2024-07-27 01:34:18.658211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.106 [2024-07-27 01:34:18.658228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.106 [2024-07-27 01:34:18.678475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.106 [2024-07-27 01:34:18.678506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.106 [2024-07-27 01:34:18.678522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.106 [2024-07-27 01:34:18.698516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.106 [2024-07-27 01:34:18.698556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.106 [2024-07-27 01:34:18.698573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.106 [2024-07-27 01:34:18.718770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.106 [2024-07-27 01:34:18.718800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.106 [2024-07-27 01:34:18.718817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.106 [2024-07-27 01:34:18.739007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.106 [2024-07-27 01:34:18.739053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.106 [2024-07-27 01:34:18.739079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.106 [2024-07-27 01:34:18.759177] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.106 [2024-07-27 01:34:18.759207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.106 [2024-07-27 01:34:18.759224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.106 [2024-07-27 01:34:18.778644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.106 [2024-07-27 01:34:18.778674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.106 [2024-07-27 01:34:18.778690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.106 [2024-07-27 01:34:18.798974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.106 [2024-07-27 01:34:18.799004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.106 [2024-07-27 01:34:18.799021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.106 [2024-07-27 01:34:18.819321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.106 [2024-07-27 01:34:18.819352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.106 [2024-07-27 01:34:18.819384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.106 [2024-07-27 01:34:18.838703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.106 [2024-07-27 01:34:18.838733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.106 [2024-07-27 01:34:18.838750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.106 [2024-07-27 01:34:18.859204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.106 [2024-07-27 01:34:18.859236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.106 [2024-07-27 01:34:18.859280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.367 [2024-07-27 01:34:18.879163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.367 [2024-07-27 01:34:18.879195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.367 [2024-07-27 01:34:18.879212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:27.367 [2024-07-27 01:34:18.900997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.367 [2024-07-27 01:34:18.901034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.367 [2024-07-27 01:34:18.901055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.367 [2024-07-27 01:34:18.932877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.367 [2024-07-27 01:34:18.932913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.367 [2024-07-27 01:34:18.932933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.367 [2024-07-27 01:34:18.954328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.367 [2024-07-27 01:34:18.954373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.367 [2024-07-27 01:34:18.954389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.367 [2024-07-27 01:34:18.976187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.367 [2024-07-27 01:34:18.976216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.367 [2024-07-27 01:34:18.976233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.367 [2024-07-27 01:34:18.997391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.367 [2024-07-27 01:34:18.997427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.367 [2024-07-27 01:34:18.997447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.367 [2024-07-27 01:34:19.019147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.367 [2024-07-27 01:34:19.019177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.367 [2024-07-27 01:34:19.019193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.367 [2024-07-27 01:34:19.041108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.367 [2024-07-27 01:34:19.041138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.367 [2024-07-27 01:34:19.041154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.367 [2024-07-27 01:34:19.062661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.367 [2024-07-27 01:34:19.062704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.367 [2024-07-27 01:34:19.062725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.367 [2024-07-27 01:34:19.084429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.367 [2024-07-27 01:34:19.084466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.367 [2024-07-27 01:34:19.084486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.367 [2024-07-27 01:34:19.106205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.367 [2024-07-27 01:34:19.106234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.367 [2024-07-27 01:34:19.106251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.626 [2024-07-27 01:34:19.127960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.626 [2024-07-27 01:34:19.127997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.626 [2024-07-27 01:34:19.128017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.626 [2024-07-27 01:34:19.149330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.626 [2024-07-27 01:34:19.149377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.626 [2024-07-27 01:34:19.149397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.626 [2024-07-27 01:34:19.170956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.626 [2024-07-27 01:34:19.170993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.626 [2024-07-27 01:34:19.171013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.626 [2024-07-27 01:34:19.192845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.626 [2024-07-27 01:34:19.192882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.626 [2024-07-27 
01:34:19.192901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.626 [2024-07-27 01:34:19.214714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.626 [2024-07-27 01:34:19.214751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.626 [2024-07-27 01:34:19.214771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.627 [2024-07-27 01:34:19.236563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.627 [2024-07-27 01:34:19.236602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.627 [2024-07-27 01:34:19.236622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.627 [2024-07-27 01:34:19.258292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.627 [2024-07-27 01:34:19.258323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.627 [2024-07-27 01:34:19.258340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.627 [2024-07-27 01:34:19.279335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.627 [2024-07-27 01:34:19.279366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.627 [2024-07-27 01:34:19.279383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.627 [2024-07-27 01:34:19.300928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.627 [2024-07-27 01:34:19.300965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.627 [2024-07-27 01:34:19.300985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.627 [2024-07-27 01:34:19.322318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.627 [2024-07-27 01:34:19.322362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.627 [2024-07-27 01:34:19.322378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.627 [2024-07-27 01:34:19.343392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.627 [2024-07-27 01:34:19.343429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8312 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:27.627 [2024-07-27 01:34:19.343449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.627 [2024-07-27 01:34:19.364758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.627 [2024-07-27 01:34:19.364795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.627 [2024-07-27 01:34:19.364816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.887 [2024-07-27 01:34:19.386735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.887 [2024-07-27 01:34:19.386772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.887 [2024-07-27 01:34:19.386792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.887 [2024-07-27 01:34:19.408691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.887 [2024-07-27 01:34:19.408729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.887 [2024-07-27 01:34:19.408749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.887 [2024-07-27 01:34:19.430540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.887 [2024-07-27 01:34:19.430579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.887 [2024-07-27 01:34:19.430605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.887 [2024-07-27 01:34:19.452799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.887 [2024-07-27 01:34:19.452836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.887 [2024-07-27 01:34:19.452856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.887 [2024-07-27 01:34:19.474356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.887 [2024-07-27 01:34:19.474404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.887 [2024-07-27 01:34:19.474424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.887 [2024-07-27 01:34:19.496316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.887 [2024-07-27 01:34:19.496364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:21267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.887 [2024-07-27 01:34:19.496384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.887 [2024-07-27 01:34:19.517688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.887 [2024-07-27 01:34:19.517726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.887 [2024-07-27 01:34:19.517745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.887 [2024-07-27 01:34:19.539530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.887 [2024-07-27 01:34:19.539567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.887 [2024-07-27 01:34:19.539587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.887 [2024-07-27 01:34:19.561365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.887 [2024-07-27 01:34:19.561395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.887 [2024-07-27 01:34:19.561429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.887 [2024-07-27 01:34:19.583009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18f0f00) 00:26:27.887 [2024-07-27 01:34:19.583046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.887 [2024-07-27 01:34:19.583073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.887 00:26:27.887 Latency(us) 00:26:27.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.887 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:27.887 nvme0n1 : 2.05 11811.45 46.14 0.00 0.00 10611.76 9417.77 62914.56 00:26:27.887 =================================================================================================================== 00:26:27.887 Total : 11811.45 46.14 0.00 0.00 10611.76 9417.77 62914.56 00:26:27.887 0 00:26:28.147 01:34:19 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:28.147 01:34:19 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:28.147 01:34:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:28.147 01:34:19 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:28.147 | .driver_specific 00:26:28.147 | .nvme_error 00:26:28.147 | .status_code 00:26:28.147 | .command_transient_transport_error' 00:26:28.147 01:34:19 -- host/digest.sh@71 -- # (( 94 > 0 )) 00:26:28.147 01:34:19 -- host/digest.sh@73 -- # killprocess 745789 00:26:28.147 01:34:19 -- common/autotest_common.sh@926 -- # '[' -z 
745789 ']' 00:26:28.147 01:34:19 -- common/autotest_common.sh@930 -- # kill -0 745789 00:26:28.147 01:34:19 -- common/autotest_common.sh@931 -- # uname 00:26:28.147 01:34:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:28.147 01:34:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 745789 00:26:28.407 01:34:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:28.407 01:34:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:28.407 01:34:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 745789' 00:26:28.407 killing process with pid 745789 00:26:28.407 01:34:19 -- common/autotest_common.sh@945 -- # kill 745789 00:26:28.407 Received shutdown signal, test time was about 2.000000 seconds 00:26:28.407 00:26:28.407 Latency(us) 00:26:28.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.407 =================================================================================================================== 00:26:28.407 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:28.407 01:34:19 -- common/autotest_common.sh@950 -- # wait 745789 00:26:28.667 01:34:20 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:26:28.667 01:34:20 -- host/digest.sh@54 -- # local rw bs qd 00:26:28.667 01:34:20 -- host/digest.sh@56 -- # rw=randread 00:26:28.667 01:34:20 -- host/digest.sh@56 -- # bs=131072 00:26:28.667 01:34:20 -- host/digest.sh@56 -- # qd=16 00:26:28.667 01:34:20 -- host/digest.sh@58 -- # bperfpid=746337 00:26:28.667 01:34:20 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:28.667 01:34:20 -- host/digest.sh@60 -- # waitforlisten 746337 /var/tmp/bperf.sock 00:26:28.667 01:34:20 -- common/autotest_common.sh@819 -- # '[' -z 746337 ']' 00:26:28.667 01:34:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:28.667 01:34:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:28.667 01:34:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:28.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:28.667 01:34:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:28.667 01:34:20 -- common/autotest_common.sh@10 -- # set +x 00:26:28.667 [2024-07-27 01:34:20.228280] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:28.667 [2024-07-27 01:34:20.228364] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid746337 ] 00:26:28.667 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:28.667 Zero copy mechanism will not be used. 
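The trace above shows run_bperf_err starting a dedicated bdevperf instance for the 128 KiB, queue-depth-16 randread error pass and giving it its own RPC socket. A condensed sketch of that launch, using only the binary path and flags visible in this run (waitforlisten is the harness helper that polls the socket; it is summarized here only as a comment, not reproduced):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # harness: waitforlisten $bperfpid /var/tmp/bperf.sock -- poll until the UNIX-domain
  # RPC socket accepts connections before issuing any bperf_rpc calls against it.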
00:26:28.667 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.667 [2024-07-27 01:34:20.288022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.667 [2024-07-27 01:34:20.394206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.603 01:34:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:29.603 01:34:21 -- common/autotest_common.sh@852 -- # return 0 00:26:29.603 01:34:21 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:29.603 01:34:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:29.861 01:34:21 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:29.861 01:34:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.861 01:34:21 -- common/autotest_common.sh@10 -- # set +x 00:26:29.861 01:34:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.861 01:34:21 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.861 01:34:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:30.119 nvme0n1 00:26:30.119 01:34:21 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:30.119 01:34:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.119 01:34:21 -- common/autotest_common.sh@10 -- # set +x 00:26:30.119 01:34:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.119 01:34:21 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:30.119 01:34:21 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:30.379 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:30.379 Zero copy mechanism will not be used. 00:26:30.379 Running I/O for 2 seconds... 
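Before the two-second randread pass begins, the harness configures digest-error injection over the RPC sockets traced above: nvme error statistics and unlimited bdev retries on the bperf side, crc32c corruption on the accel side, and a controller attached with data digest (--ddgst) enabled so the corrupted payloads surface as the data digest errors logged below. A condensed sketch of that sequence, plus the error count read back afterwards, using only commands visible in this run; rpc_cmd in the harness resolves to rpc.py against the application's default socket, which is not shown in this excerpt, so $RPC_DEFAULT below is a placeholder for it:

  BPERF_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC_DEFAULT accel_error_inject_error -o crc32c -t disable        # clear any previous injection
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $RPC_DEFAULT accel_error_inject_error -o crc32c -t corrupt -i 32  # re-enable crc32c corruption (-i 32, as traced above)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests
  # count the digest-induced transient transport errors, as get_transient_errcount does:
  $BPERF_RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
      | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'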
00:26:30.379 [2024-07-27 01:34:21.955361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.379 [2024-07-27 01:34:21.955441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.379 [2024-07-27 01:34:21.955462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.379 [2024-07-27 01:34:21.973634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.379 [2024-07-27 01:34:21.973669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.379 [2024-07-27 01:34:21.973687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.379 [2024-07-27 01:34:21.990836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.379 [2024-07-27 01:34:21.990866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.379 [2024-07-27 01:34:21.990884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.379 [2024-07-27 01:34:22.008607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.379 [2024-07-27 01:34:22.008639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.379 [2024-07-27 01:34:22.008656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.379 [2024-07-27 01:34:22.026326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.379 [2024-07-27 01:34:22.026358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.379 [2024-07-27 01:34:22.026392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.379 [2024-07-27 01:34:22.045170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.379 [2024-07-27 01:34:22.045202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.379 [2024-07-27 01:34:22.045220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.379 [2024-07-27 01:34:22.063921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.379 [2024-07-27 01:34:22.063953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.379 [2024-07-27 01:34:22.063970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.379 [2024-07-27 01:34:22.079192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.379 [2024-07-27 01:34:22.079224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.379 [2024-07-27 01:34:22.079243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.379 [2024-07-27 01:34:22.097655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.379 [2024-07-27 01:34:22.097686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.379 [2024-07-27 01:34:22.097702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.379 [2024-07-27 01:34:22.111270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.379 [2024-07-27 01:34:22.111303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.379 [2024-07-27 01:34:22.111320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.379 [2024-07-27 01:34:22.130410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.379 [2024-07-27 01:34:22.130443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.379 [2024-07-27 01:34:22.130461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.149183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.149216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.149234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.166271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.166303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.166320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.182228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.182261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.182278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.199422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.199461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.199479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.215633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.215664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.215681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.233569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.233600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.233617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.250128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.250160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.250177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.266178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.266211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.266228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.281489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.281521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.281537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.298316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.298348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:30.640 [2024-07-27 01:34:22.298365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.314828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.314860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.314876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.330429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.330461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.330478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.346622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.346653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.346670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.362930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.362962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.362979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.378195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.378227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.378245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.640 [2024-07-27 01:34:22.396538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.640 [2024-07-27 01:34:22.396570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.640 [2024-07-27 01:34:22.396587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.900 [2024-07-27 01:34:22.412049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.900 [2024-07-27 01:34:22.412106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.900 [2024-07-27 01:34:22.412125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.900 [2024-07-27 01:34:22.428823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.900 [2024-07-27 01:34:22.428854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.900 [2024-07-27 01:34:22.428870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.900 [2024-07-27 01:34:22.446701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.900 [2024-07-27 01:34:22.446733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.900 [2024-07-27 01:34:22.446749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.900 [2024-07-27 01:34:22.463170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.900 [2024-07-27 01:34:22.463202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.900 [2024-07-27 01:34:22.463219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.900 [2024-07-27 01:34:22.480170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.900 [2024-07-27 01:34:22.480202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.900 [2024-07-27 01:34:22.480228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.900 [2024-07-27 01:34:22.495191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.900 [2024-07-27 01:34:22.495239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.900 [2024-07-27 01:34:22.495256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.900 [2024-07-27 01:34:22.511955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.900 [2024-07-27 01:34:22.511986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.900 [2024-07-27 01:34:22.512003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.900 [2024-07-27 01:34:22.527488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.900 [2024-07-27 01:34:22.527520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.900 [2024-07-27 01:34:22.527537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.900 [2024-07-27 01:34:22.543994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.900 [2024-07-27 01:34:22.544026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.901 [2024-07-27 01:34:22.544066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.901 [2024-07-27 01:34:22.561241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.901 [2024-07-27 01:34:22.561281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.901 [2024-07-27 01:34:22.561299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.901 [2024-07-27 01:34:22.577759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.901 [2024-07-27 01:34:22.577804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.901 [2024-07-27 01:34:22.577821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.901 [2024-07-27 01:34:22.594970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.901 [2024-07-27 01:34:22.595001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.901 [2024-07-27 01:34:22.595018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.901 [2024-07-27 01:34:22.612014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.901 [2024-07-27 01:34:22.612046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.901 [2024-07-27 01:34:22.612084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.901 [2024-07-27 01:34:22.627969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.901 [2024-07-27 01:34:22.628016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.901 [2024-07-27 01:34:22.628038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.901 [2024-07-27 01:34:22.642171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 
00:26:30.901 [2024-07-27 01:34:22.642218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.901 [2024-07-27 01:34:22.642236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.901 [2024-07-27 01:34:22.657660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:30.901 [2024-07-27 01:34:22.657696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.657716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.161 [2024-07-27 01:34:22.673458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.161 [2024-07-27 01:34:22.673497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.673518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.161 [2024-07-27 01:34:22.687882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.161 [2024-07-27 01:34:22.687920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.687940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.161 [2024-07-27 01:34:22.703721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.161 [2024-07-27 01:34:22.703759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.703779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.161 [2024-07-27 01:34:22.718668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.161 [2024-07-27 01:34:22.718706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.718725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.161 [2024-07-27 01:34:22.734895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.161 [2024-07-27 01:34:22.734933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.734953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.161 [2024-07-27 01:34:22.750512] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.161 [2024-07-27 01:34:22.750550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.750570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.161 [2024-07-27 01:34:22.766258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.161 [2024-07-27 01:34:22.766293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.766319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.161 [2024-07-27 01:34:22.781930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.161 [2024-07-27 01:34:22.781967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.781987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.161 [2024-07-27 01:34:22.795629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.161 [2024-07-27 01:34:22.795665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.795684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.161 [2024-07-27 01:34:22.809505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.161 [2024-07-27 01:34:22.809542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.809562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.161 [2024-07-27 01:34:22.826325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.161 [2024-07-27 01:34:22.826367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.826384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.161 [2024-07-27 01:34:22.840809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.161 [2024-07-27 01:34:22.840846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.840866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:26:31.161 [2024-07-27 01:34:22.856011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.161 [2024-07-27 01:34:22.856047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.856075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.161 [2024-07-27 01:34:22.871908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.161 [2024-07-27 01:34:22.871945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.161 [2024-07-27 01:34:22.871965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.162 [2024-07-27 01:34:22.887832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.162 [2024-07-27 01:34:22.887878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.162 [2024-07-27 01:34:22.887898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.162 [2024-07-27 01:34:22.904289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.162 [2024-07-27 01:34:22.904323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.162 [2024-07-27 01:34:22.904341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.422 [2024-07-27 01:34:22.919920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.422 [2024-07-27 01:34:22.919958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.422 [2024-07-27 01:34:22.919978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.422 [2024-07-27 01:34:22.935578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.422 [2024-07-27 01:34:22.935614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.422 [2024-07-27 01:34:22.935634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.422 [2024-07-27 01:34:22.951736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.422 [2024-07-27 01:34:22.951773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.422 [2024-07-27 01:34:22.951793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.422 [2024-07-27 01:34:22.967376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.422 [2024-07-27 01:34:22.967414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.422 [2024-07-27 01:34:22.967434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.422 [2024-07-27 01:34:22.982254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.422 [2024-07-27 01:34:22.982288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.422 [2024-07-27 01:34:22.982306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.422 [2024-07-27 01:34:22.998381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.422 [2024-07-27 01:34:22.998432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.422 [2024-07-27 01:34:22.998453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.422 [2024-07-27 01:34:23.014024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.422 [2024-07-27 01:34:23.014070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.422 [2024-07-27 01:34:23.014118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.422 [2024-07-27 01:34:23.029815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.422 [2024-07-27 01:34:23.029852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.422 [2024-07-27 01:34:23.029872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.422 [2024-07-27 01:34:23.046041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.422 [2024-07-27 01:34:23.046101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.422 [2024-07-27 01:34:23.046120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.423 [2024-07-27 01:34:23.061337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.423 [2024-07-27 01:34:23.061383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.423 [2024-07-27 01:34:23.061401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.423 [2024-07-27 01:34:23.077603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.423 [2024-07-27 01:34:23.077639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.423 [2024-07-27 01:34:23.077659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.423 [2024-07-27 01:34:23.092391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.423 [2024-07-27 01:34:23.092441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.423 [2024-07-27 01:34:23.092461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.423 [2024-07-27 01:34:23.107427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.423 [2024-07-27 01:34:23.107463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.423 [2024-07-27 01:34:23.107483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.423 [2024-07-27 01:34:23.122329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.423 [2024-07-27 01:34:23.122360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.423 [2024-07-27 01:34:23.122393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.423 [2024-07-27 01:34:23.138339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.423 [2024-07-27 01:34:23.138390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.423 [2024-07-27 01:34:23.138410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.423 [2024-07-27 01:34:23.152732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.423 [2024-07-27 01:34:23.152769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.423 [2024-07-27 01:34:23.152795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.423 [2024-07-27 01:34:23.167871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.423 [2024-07-27 01:34:23.167907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:31.423 [2024-07-27 01:34:23.167927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.182764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.182802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.182821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.198083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.198130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.198148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.214389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.214438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.214458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.229916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.229952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.229972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.245386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.245437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.245458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.262466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.262502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.262522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.277483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.277520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.277540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.292198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.292236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.292255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.307379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.307410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.307445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.322264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.322296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.322313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.338131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.338163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.338181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.352858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.352894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.352914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.368322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.368354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.368387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.383364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.383413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.383433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.397787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.397823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.397843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.413099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.413130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.413152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.682 [2024-07-27 01:34:23.428962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.682 [2024-07-27 01:34:23.428998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.682 [2024-07-27 01:34:23.429017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.940 [2024-07-27 01:34:23.443556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.940 [2024-07-27 01:34:23.443594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.940 [2024-07-27 01:34:23.443614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.940 [2024-07-27 01:34:23.459241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.940 [2024-07-27 01:34:23.459272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.940 [2024-07-27 01:34:23.459289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.940 [2024-07-27 01:34:23.474585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.940 [2024-07-27 01:34:23.474622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.940 [2024-07-27 01:34:23.474642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.940 [2024-07-27 01:34:23.488609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 
00:26:31.940 [2024-07-27 01:34:23.488645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.940 [2024-07-27 01:34:23.488664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.940 [2024-07-27 01:34:23.504655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.940 [2024-07-27 01:34:23.504692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.940 [2024-07-27 01:34:23.504711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.940 [2024-07-27 01:34:23.519294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.940 [2024-07-27 01:34:23.519326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.940 [2024-07-27 01:34:23.519344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.940 [2024-07-27 01:34:23.534748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.940 [2024-07-27 01:34:23.534784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.940 [2024-07-27 01:34:23.534803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.941 [2024-07-27 01:34:23.550280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.941 [2024-07-27 01:34:23.550329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-07-27 01:34:23.550347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.941 [2024-07-27 01:34:23.566018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.941 [2024-07-27 01:34:23.566055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-07-27 01:34:23.566087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.941 [2024-07-27 01:34:23.582088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.941 [2024-07-27 01:34:23.582141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-07-27 01:34:23.582159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.941 [2024-07-27 01:34:23.598132] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.941 [2024-07-27 01:34:23.598163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-07-27 01:34:23.598181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.941 [2024-07-27 01:34:23.612604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.941 [2024-07-27 01:34:23.612640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-07-27 01:34:23.612660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.941 [2024-07-27 01:34:23.627015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.941 [2024-07-27 01:34:23.627051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-07-27 01:34:23.627080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.941 [2024-07-27 01:34:23.642654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.941 [2024-07-27 01:34:23.642690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-07-27 01:34:23.642710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.941 [2024-07-27 01:34:23.657869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.941 [2024-07-27 01:34:23.657905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-07-27 01:34:23.657925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.941 [2024-07-27 01:34:23.673711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.941 [2024-07-27 01:34:23.673747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-07-27 01:34:23.673767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:31.941 [2024-07-27 01:34:23.688589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:31.941 [2024-07-27 01:34:23.688625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-07-27 01:34:23.688645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.704727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.704765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.704785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.719890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.719926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.719945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.735842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.735878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.735897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.751515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.751551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.751571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.767231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.767264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.767281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.782613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.782650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.782670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.800230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.800263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.800281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.816561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.816598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.816624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.832467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.832504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.832523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.848318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.848351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.848384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.863815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.863852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.863872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.878763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.878801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.878821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.894831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.894867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.894888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.911074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.911134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.911151] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.926808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.926844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.926864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:32.199 [2024-07-27 01:34:23.942812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2372880) 00:26:32.199 [2024-07-27 01:34:23.942848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.199 [2024-07-27 01:34:23.942868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:32.456 00:26:32.456 Latency(us) 00:26:32.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.456 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:32.456 nvme0n1 : 2.05 1907.21 238.40 0.00 0.00 8225.99 6359.42 49321.91 00:26:32.456 =================================================================================================================== 00:26:32.456 Total : 1907.21 238.40 0.00 0.00 8225.99 6359.42 49321.91 00:26:32.456 0 00:26:32.456 01:34:23 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:32.456 01:34:23 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:32.456 01:34:24 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:32.456 | .driver_specific 00:26:32.456 | .nvme_error 00:26:32.456 | .status_code 00:26:32.456 | .command_transient_transport_error' 00:26:32.456 01:34:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:32.715 01:34:24 -- host/digest.sh@71 -- # (( 126 > 0 )) 00:26:32.715 01:34:24 -- host/digest.sh@73 -- # killprocess 746337 00:26:32.715 01:34:24 -- common/autotest_common.sh@926 -- # '[' -z 746337 ']' 00:26:32.715 01:34:24 -- common/autotest_common.sh@930 -- # kill -0 746337 00:26:32.715 01:34:24 -- common/autotest_common.sh@931 -- # uname 00:26:32.715 01:34:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:32.715 01:34:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 746337 00:26:32.715 01:34:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:32.715 01:34:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:32.715 01:34:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 746337' 00:26:32.715 killing process with pid 746337 00:26:32.715 01:34:24 -- common/autotest_common.sh@945 -- # kill 746337 00:26:32.715 Received shutdown signal, test time was about 2.000000 seconds 00:26:32.715 00:26:32.715 Latency(us) 00:26:32.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.715 =================================================================================================================== 00:26:32.715 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:32.715 01:34:24 -- common/autotest_common.sh@950 -- 
# wait 746337 00:26:32.973 01:34:24 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:26:32.973 01:34:24 -- host/digest.sh@54 -- # local rw bs qd 00:26:32.973 01:34:24 -- host/digest.sh@56 -- # rw=randwrite 00:26:32.973 01:34:24 -- host/digest.sh@56 -- # bs=4096 00:26:32.973 01:34:24 -- host/digest.sh@56 -- # qd=128 00:26:32.973 01:34:24 -- host/digest.sh@58 -- # bperfpid=746887 00:26:32.973 01:34:24 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:32.973 01:34:24 -- host/digest.sh@60 -- # waitforlisten 746887 /var/tmp/bperf.sock 00:26:32.973 01:34:24 -- common/autotest_common.sh@819 -- # '[' -z 746887 ']' 00:26:32.973 01:34:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:32.973 01:34:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:32.973 01:34:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:32.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:32.973 01:34:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:32.973 01:34:24 -- common/autotest_common.sh@10 -- # set +x 00:26:32.973 [2024-07-27 01:34:24.577897] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:32.973 [2024-07-27 01:34:24.577967] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid746887 ] 00:26:32.973 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.973 [2024-07-27 01:34:24.636736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.231 [2024-07-27 01:34:24.742527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.799 01:34:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:33.799 01:34:25 -- common/autotest_common.sh@852 -- # return 0 00:26:33.799 01:34:25 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:33.799 01:34:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:34.365 01:34:25 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:34.365 01:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:34.365 01:34:25 -- common/autotest_common.sh@10 -- # set +x 00:26:34.365 01:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:34.365 01:34:25 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:34.365 01:34:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:34.623 nvme0n1 00:26:34.623 01:34:26 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:34.623 01:34:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:34.623 01:34:26 -- common/autotest_common.sh@10 -- # set +x 00:26:34.623 01:34:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:34.623 01:34:26 -- 
host/digest.sh@69 -- # bperf_py perform_tests 00:26:34.623 01:34:26 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:34.909 Running I/O for 2 seconds... 00:26:34.909 [2024-07-27 01:34:26.465409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ecc78 00:26:34.909 [2024-07-27 01:34:26.466713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.466758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:34.909 [2024-07-27 01:34:26.477973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ed4e8 00:26:34.909 [2024-07-27 01:34:26.479261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.479292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:34.909 [2024-07-27 01:34:26.490544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ed4e8 00:26:34.909 [2024-07-27 01:34:26.491848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.491882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:34.909 [2024-07-27 01:34:26.503025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ed4e8 00:26:34.909 [2024-07-27 01:34:26.504341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.504386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:34.909 [2024-07-27 01:34:26.515546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ed4e8 00:26:34.909 [2024-07-27 01:34:26.516855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.516888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:34.909 [2024-07-27 01:34:26.528022] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ed4e8 00:26:34.909 [2024-07-27 01:34:26.529366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.529396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:34.909 [2024-07-27 01:34:26.540460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ed4e8 00:26:34.909 [2024-07-27 01:34:26.541826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.541855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:34.909 [2024-07-27 01:34:26.553204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f46d0 00:26:34.909 [2024-07-27 01:34:26.554419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.554454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:34.909 [2024-07-27 01:34:26.566096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f6890 00:26:34.909 [2024-07-27 01:34:26.566628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.566662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:34.909 [2024-07-27 01:34:26.578811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 00:26:34.909 [2024-07-27 01:34:26.580054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.580097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:34.909 [2024-07-27 01:34:26.591189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 00:26:34.909 [2024-07-27 01:34:26.592454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.592490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:34.909 [2024-07-27 01:34:26.603734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 00:26:34.909 [2024-07-27 01:34:26.604996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.605028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:34.909 [2024-07-27 01:34:26.616266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 00:26:34.909 [2024-07-27 01:34:26.617520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.617553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:34.909 [2024-07-27 01:34:26.628723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 00:26:34.909 [2024-07-27 
01:34:26.629956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.629989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:34.909 [2024-07-27 01:34:26.641285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 00:26:34.909 [2024-07-27 01:34:26.642633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.909 [2024-07-27 01:34:26.642666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.653836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 00:26:35.170 [2024-07-27 01:34:26.655136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.655166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.666340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 00:26:35.170 [2024-07-27 01:34:26.667685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.667718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.678750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 00:26:35.170 [2024-07-27 01:34:26.680070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.680115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.691136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 00:26:35.170 [2024-07-27 01:34:26.692548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.692581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.703674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 00:26:35.170 [2024-07-27 01:34:26.705050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.705091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.716171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 
00:26:35.170 [2024-07-27 01:34:26.717653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.717687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.728601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 00:26:35.170 [2024-07-27 01:34:26.729963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.729996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.741008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 00:26:35.170 [2024-07-27 01:34:26.742452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.742489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.753607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9168 00:26:35.170 [2024-07-27 01:34:26.754991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.755023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.765972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190edd58 00:26:35.170 [2024-07-27 01:34:26.767529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.767561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.778575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f7da8 00:26:35.170 [2024-07-27 01:34:26.780003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.780035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.790873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f96f8 00:26:35.170 [2024-07-27 01:34:26.792343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.792371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.803443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x161c1a0) with pdu=0x2000190e8d30 00:26:35.170 [2024-07-27 01:34:26.804884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.804916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.815869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ed4e8 00:26:35.170 [2024-07-27 01:34:26.817310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.817351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.828233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f8a50 00:26:35.170 [2024-07-27 01:34:26.829722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.829755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.839252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f92c0 00:26:35.170 [2024-07-27 01:34:26.840261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.840303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.851726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e8088 00:26:35.170 [2024-07-27 01:34:26.852747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.852779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.864154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f8a50 00:26:35.170 [2024-07-27 01:34:26.865177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.865219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.876513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ee190 00:26:35.170 [2024-07-27 01:34:26.877620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.877653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.889146] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f9b30 00:26:35.170 [2024-07-27 01:34:26.890207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.890234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:35.170 [2024-07-27 01:34:26.901496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.170 [2024-07-27 01:34:26.902616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.170 [2024-07-27 01:34:26.902647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:35.171 [2024-07-27 01:34:26.914022] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.171 [2024-07-27 01:34:26.915174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.171 [2024-07-27 01:34:26.915212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:35.171 [2024-07-27 01:34:26.926467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.429 [2024-07-27 01:34:26.927609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.429 [2024-07-27 01:34:26.927643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:35.429 [2024-07-27 01:34:26.939007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.429 [2024-07-27 01:34:26.940135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.429 [2024-07-27 01:34:26.940162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:35.429 [2024-07-27 01:34:26.951339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.429 [2024-07-27 01:34:26.952500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.429 [2024-07-27 01:34:26.952532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:35.429 [2024-07-27 01:34:26.963865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.429 [2024-07-27 01:34:26.964995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.429 [2024-07-27 01:34:26.965026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:26.976255] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.430 [2024-07-27 01:34:26.977438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:26.977471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:26.988725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.430 [2024-07-27 01:34:26.989898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:26.989931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:27.001179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.430 [2024-07-27 01:34:27.002365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.002420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:27.013552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.430 [2024-07-27 01:34:27.014826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.014859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:27.025944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.430 [2024-07-27 01:34:27.027167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.027209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:27.038295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.430 [2024-07-27 01:34:27.039538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.039571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:27.050559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.430 [2024-07-27 01:34:27.051825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.051857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:35.430 
[2024-07-27 01:34:27.062944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.430 [2024-07-27 01:34:27.064222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.064264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:27.075283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.430 [2024-07-27 01:34:27.076547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.076580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:27.087575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.430 [2024-07-27 01:34:27.088917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.088950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:27.099981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea680 00:26:35.430 [2024-07-27 01:34:27.101264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.101305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:27.112399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea248 00:26:35.430 [2024-07-27 01:34:27.113650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.113683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:27.124846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea248 00:26:35.430 [2024-07-27 01:34:27.126027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.126065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:27.138889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea248 00:26:35.430 [2024-07-27 01:34:27.140250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.140279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:27.151447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ea248 00:26:35.430 [2024-07-27 01:34:27.152836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.152868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:27.162344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:35.430 [2024-07-27 01:34:27.163757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.163790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:35.430 [2024-07-27 01:34:27.174769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:35.430 [2024-07-27 01:34:27.176034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.430 [2024-07-27 01:34:27.176073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.187293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:35.691 [2024-07-27 01:34:27.188582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.188614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.199766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:35.691 [2024-07-27 01:34:27.200955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.200987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.213753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:35.691 [2024-07-27 01:34:27.215046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.215089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.224578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f2d80 00:26:35.691 [2024-07-27 01:34:27.225975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.226008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 
cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.238610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f2d80 00:26:35.691 [2024-07-27 01:34:27.239975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.240026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.249493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ee190 00:26:35.691 [2024-07-27 01:34:27.251044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.251113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.262080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ee190 00:26:35.691 [2024-07-27 01:34:27.263594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.263628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.276231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ee190 00:26:35.691 [2024-07-27 01:34:27.277471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.277509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.287240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e8d30 00:26:35.691 [2024-07-27 01:34:27.288707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.288740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.301161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e8d30 00:26:35.691 [2024-07-27 01:34:27.302402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.302449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.313565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f3a28 00:26:35.691 [2024-07-27 01:34:27.314812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.314849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.325972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ee5c8 00:26:35.691 [2024-07-27 01:34:27.327240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.327269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.338340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190fa7d8 00:26:35.691 [2024-07-27 01:34:27.339547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.339579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.350754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ed920 00:26:35.691 [2024-07-27 01:34:27.352006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.352038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.363066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ed4e8 00:26:35.691 [2024-07-27 01:34:27.363913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.363945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.375281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190efae0 00:26:35.691 [2024-07-27 01:34:27.376133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.376178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.387730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f1430 00:26:35.691 [2024-07-27 01:34:27.388845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.388882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.400033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ed920 00:26:35.691 [2024-07-27 01:34:27.401133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.401166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.412325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ee190 00:26:35.691 [2024-07-27 01:34:27.413127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.413159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.424651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f92c0 00:26:35.691 [2024-07-27 01:34:27.425797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.425828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:35.691 [2024-07-27 01:34:27.437209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9e10 00:26:35.691 [2024-07-27 01:34:27.438018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.691 [2024-07-27 01:34:27.438050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:35.952 [2024-07-27 01:34:27.449578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f96f8 00:26:35.952 [2024-07-27 01:34:27.450356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.952 [2024-07-27 01:34:27.450385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:35.952 [2024-07-27 01:34:27.461761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190fb480 00:26:35.952 [2024-07-27 01:34:27.462564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.952 [2024-07-27 01:34:27.462597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:35.952 [2024-07-27 01:34:27.474031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190fb480 00:26:35.952 [2024-07-27 01:34:27.475667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.952 [2024-07-27 01:34:27.475699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:35.952 [2024-07-27 01:34:27.486217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ebfd0 00:26:35.952 [2024-07-27 01:34:27.487660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.952 [2024-07-27 01:34:27.487693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.498699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190fa7d8 00:26:35.953 [2024-07-27 01:34:27.500192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.500220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.511138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f1868 00:26:35.953 [2024-07-27 01:34:27.512613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.512647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.523611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f2d80 00:26:35.953 [2024-07-27 01:34:27.525108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.525141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.535963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190edd58 00:26:35.953 [2024-07-27 01:34:27.537560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.537592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.548401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f46d0 00:26:35.953 [2024-07-27 01:34:27.549511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.549544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.560715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f46d0 00:26:35.953 [2024-07-27 01:34:27.562109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.562152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.573134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e8d30 00:26:35.953 [2024-07-27 01:34:27.574526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 
01:34:27.574558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.585647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:35.953 [2024-07-27 01:34:27.587056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.587095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.598194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f9f68 00:26:35.953 [2024-07-27 01:34:27.599591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.599624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.610656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f5378 00:26:35.953 [2024-07-27 01:34:27.612105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.612141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.623162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f0bc0 00:26:35.953 [2024-07-27 01:34:27.624576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.624608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.635585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190edd58 00:26:35.953 [2024-07-27 01:34:27.637043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.637082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.646618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f8a50 00:26:35.953 [2024-07-27 01:34:27.647695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.647728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.659174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e8d30 00:26:35.953 [2024-07-27 01:34:27.660244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:35.953 [2024-07-27 01:34:27.660272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.671574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190eaef0 00:26:35.953 [2024-07-27 01:34:27.672686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.672718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.683954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f57b0 00:26:35.953 [2024-07-27 01:34:27.685066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.685097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.696302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190fc998 00:26:35.953 [2024-07-27 01:34:27.697443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.953 [2024-07-27 01:34:27.697476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:35.953 [2024-07-27 01:34:27.708679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190fc998 00:26:36.212 [2024-07-27 01:34:27.709832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.212 [2024-07-27 01:34:27.709871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:36.212 [2024-07-27 01:34:27.720986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190fc998 00:26:36.212 [2024-07-27 01:34:27.722166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.212 [2024-07-27 01:34:27.722208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:36.212 [2024-07-27 01:34:27.733332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190fc998 00:26:36.213 [2024-07-27 01:34:27.734543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.734576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.745590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190fc998 00:26:36.213 [2024-07-27 01:34:27.746835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11456 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.746882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.757739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ef270 00:26:36.213 [2024-07-27 01:34:27.758692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.758725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.770076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190eea00 00:26:36.213 [2024-07-27 01:34:27.771092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.771139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.782416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f92c0 00:26:36.213 [2024-07-27 01:34:27.783426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.783460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.795163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f4f40 00:26:36.213 [2024-07-27 01:34:27.795859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.795891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.807790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f92c0 00:26:36.213 [2024-07-27 01:34:27.808767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.808799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.820136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f1868 00:26:36.213 [2024-07-27 01:34:27.821165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.821195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.832686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190efae0 00:26:36.213 [2024-07-27 01:34:27.833696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:9788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.833728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.845218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f2510 00:26:36.213 [2024-07-27 01:34:27.846247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.846279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.857831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.213 [2024-07-27 01:34:27.858884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.858917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.870270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.213 [2024-07-27 01:34:27.871307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.871338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.882835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.213 [2024-07-27 01:34:27.883888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.883921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.895305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.213 [2024-07-27 01:34:27.896377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.896408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.907728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.213 [2024-07-27 01:34:27.908817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.908850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.920227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.213 [2024-07-27 01:34:27.921359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:94 nsid:1 lba:13938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.921387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.932775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.213 [2024-07-27 01:34:27.933909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.933941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.945251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.213 [2024-07-27 01:34:27.946367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.946398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.213 [2024-07-27 01:34:27.957693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.213 [2024-07-27 01:34:27.958830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.213 [2024-07-27 01:34:27.958862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.472 [2024-07-27 01:34:27.970133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.472 [2024-07-27 01:34:27.971302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.472 [2024-07-27 01:34:27.971345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:36.472 [2024-07-27 01:34:27.982612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.472 [2024-07-27 01:34:27.983814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.472 [2024-07-27 01:34:27.983846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:36.472 [2024-07-27 01:34:27.995119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.472 [2024-07-27 01:34:27.996333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.472 [2024-07-27 01:34:27.996365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:36.472 [2024-07-27 01:34:28.007515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.472 [2024-07-27 01:34:28.008805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.472 [2024-07-27 01:34:28.008838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:36.472 [2024-07-27 01:34:28.020080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.472 [2024-07-27 01:34:28.021329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.472 [2024-07-27 01:34:28.021371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:36.472 [2024-07-27 01:34:28.032506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.472 [2024-07-27 01:34:28.033766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.472 [2024-07-27 01:34:28.033804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:36.472 [2024-07-27 01:34:28.044985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.472 [2024-07-27 01:34:28.046235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.472 [2024-07-27 01:34:28.046277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:36.472 [2024-07-27 01:34:28.057405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.472 [2024-07-27 01:34:28.058673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.472 [2024-07-27 01:34:28.058705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:36.472 [2024-07-27 01:34:28.069876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.472 [2024-07-27 01:34:28.071162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.472 [2024-07-27 01:34:28.071189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:36.472 [2024-07-27 01:34:28.082303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.472 [2024-07-27 01:34:28.083544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.472 [2024-07-27 01:34:28.083576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:36.472 [2024-07-27 01:34:28.094646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.472 [2024-07-27 
01:34:28.095898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.472 [2024-07-27 01:34:28.095930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:36.472 [2024-07-27 01:34:28.106870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e84c0 00:26:36.473 [2024-07-27 01:34:28.108142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.473 [2024-07-27 01:34:28.108170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:36.473 [2024-07-27 01:34:28.119209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e8d30 00:26:36.473 [2024-07-27 01:34:28.120630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.473 [2024-07-27 01:34:28.120662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:36.473 [2024-07-27 01:34:28.131550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190efae0 00:26:36.473 [2024-07-27 01:34:28.132921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.473 [2024-07-27 01:34:28.132953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:36.473 [2024-07-27 01:34:28.143716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f1430 00:26:36.473 [2024-07-27 01:34:28.145025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.473 [2024-07-27 01:34:28.145069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:36.473 [2024-07-27 01:34:28.155977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190eb760 00:26:36.473 [2024-07-27 01:34:28.157367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.473 [2024-07-27 01:34:28.157393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:36.473 [2024-07-27 01:34:28.169866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ee5c8 00:26:36.473 [2024-07-27 01:34:28.170971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.473 [2024-07-27 01:34:28.171003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.473 [2024-07-27 01:34:28.182202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f6020 
00:26:36.473 [2024-07-27 01:34:28.183095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.473 [2024-07-27 01:34:28.183132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:36.473 [2024-07-27 01:34:28.194688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f7da8 00:26:36.473 [2024-07-27 01:34:28.195773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.473 [2024-07-27 01:34:28.195805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:36.473 [2024-07-27 01:34:28.206928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ed920 00:26:36.473 [2024-07-27 01:34:28.207838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.473 [2024-07-27 01:34:28.207871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:36.473 [2024-07-27 01:34:28.219196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190fb8b8 00:26:36.473 [2024-07-27 01:34:28.220093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.473 [2024-07-27 01:34:28.220140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:36.732 [2024-07-27 01:34:28.231266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e9e10 00:26:36.732 [2024-07-27 01:34:28.233121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.732 [2024-07-27 01:34:28.233150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:36.732 [2024-07-27 01:34:28.243635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f57b0 00:26:36.732 [2024-07-27 01:34:28.245159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.732 [2024-07-27 01:34:28.245187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:36.732 [2024-07-27 01:34:28.256022] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f1868 00:26:36.732 [2024-07-27 01:34:28.257602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.732 [2024-07-27 01:34:28.257635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:36.732 [2024-07-27 01:34:28.268532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with 
pdu=0x2000190eb328 00:26:36.732 [2024-07-27 01:34:28.270138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.732 [2024-07-27 01:34:28.270183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:36.732 [2024-07-27 01:34:28.280921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190e95a0 00:26:36.732 [2024-07-27 01:34:28.282548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.732 [2024-07-27 01:34:28.282582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:36.732 [2024-07-27 01:34:28.293425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190fa7d8 00:26:36.732 [2024-07-27 01:34:28.295002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.732 [2024-07-27 01:34:28.295033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.732 [2024-07-27 01:34:28.305868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f6890 00:26:36.732 [2024-07-27 01:34:28.307524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.732 [2024-07-27 01:34:28.307557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.732 [2024-07-27 01:34:28.316507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190fac10 00:26:36.732 [2024-07-27 01:34:28.317441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.732 [2024-07-27 01:34:28.317473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:36.732 [2024-07-27 01:34:28.329007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f92c0 00:26:36.733 [2024-07-27 01:34:28.329905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.733 [2024-07-27 01:34:28.329936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:36.733 [2024-07-27 01:34:28.341400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f4f40 00:26:36.733 [2024-07-27 01:34:28.342298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.733 [2024-07-27 01:34:28.342326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:36.733 [2024-07-27 01:34:28.353830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x161c1a0) with pdu=0x2000190ee190 00:26:36.733 [2024-07-27 01:34:28.354763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.733 [2024-07-27 01:34:28.354811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:36.733 [2024-07-27 01:34:28.366280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f0788 00:26:36.733 [2024-07-27 01:34:28.367243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.733 [2024-07-27 01:34:28.367273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:36.733 [2024-07-27 01:34:28.378753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190fe720 00:26:36.733 [2024-07-27 01:34:28.379725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.733 [2024-07-27 01:34:28.379758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:36.733 [2024-07-27 01:34:28.391194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190fe720 00:26:36.733 [2024-07-27 01:34:28.392171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.733 [2024-07-27 01:34:28.392200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:36.733 [2024-07-27 01:34:28.403786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f0788 00:26:36.733 [2024-07-27 01:34:28.404779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.733 [2024-07-27 01:34:28.404811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:36.733 [2024-07-27 01:34:28.416302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190ee190 00:26:36.733 [2024-07-27 01:34:28.417329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.733 [2024-07-27 01:34:28.417369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:36.733 [2024-07-27 01:34:28.429081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f4f40 00:26:36.733 [2024-07-27 01:34:28.430076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.733 [2024-07-27 01:34:28.430123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:36.733 [2024-07-27 01:34:28.441918] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x161c1a0) with pdu=0x2000190f92c0
00:26:36.733 [2024-07-27 01:34:28.442930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.733 [2024-07-27 01:34:28.442962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:36.733
00:26:36.733 Latency(us)
00:26:36.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:36.733 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:36.733 nvme0n1 : 2.00 20464.22 79.94 0.00 0.00 6246.72 3155.44 17087.91
00:26:36.733 ===================================================================================================================
00:26:36.733 Total : 20464.22 79.94 0.00 0.00 6246.72 3155.44 17087.91
00:26:36.733 0
00:26:36.733 01:34:28 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:36.733 01:34:28 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:36.733 01:34:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:36.733 01:34:28 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:36.733 | .driver_specific
00:26:36.733 | .nvme_error
00:26:36.733 | .status_code
00:26:36.733 | .command_transient_transport_error'
00:26:36.993 01:34:28 -- host/digest.sh@71 -- # (( 160 > 0 ))
00:26:36.993 01:34:28 -- host/digest.sh@73 -- # killprocess 746887
00:26:36.993 01:34:28 -- common/autotest_common.sh@926 -- # '[' -z 746887 ']'
00:26:36.993 01:34:28 -- common/autotest_common.sh@930 -- # kill -0 746887
00:26:36.993 01:34:28 -- common/autotest_common.sh@931 -- # uname
00:26:36.993 01:34:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:36.993 01:34:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 746887
00:26:36.993 01:34:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:26:36.993 01:34:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:26:36.993 01:34:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 746887'
00:26:36.993 killing process with pid 746887
00:26:36.993 01:34:28 -- common/autotest_common.sh@945 -- # kill 746887
00:26:36.993 Received shutdown signal, test time was about 2.000000 seconds
00:26:36.993
00:26:36.993 Latency(us)
00:26:36.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:36.993 ===================================================================================================================
00:26:36.993 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:36.993 01:34:28 -- common/autotest_common.sh@950 -- # wait 746887
00:26:37.564 01:34:29 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:26:37.564 01:34:29 -- host/digest.sh@54 -- # local rw bs qd
00:26:37.564 01:34:29 -- host/digest.sh@56 -- # rw=randwrite
00:26:37.564 01:34:29 -- host/digest.sh@56 -- # bs=131072
00:26:37.564 01:34:29 -- host/digest.sh@56 -- # qd=16
00:26:37.564 01:34:29 -- host/digest.sh@58 -- # bperfpid=747444
00:26:37.564 01:34:29 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:37.564 01:34:29 -- host/digest.sh@60 -- # waitforlisten 747444 /var/tmp/bperf.sock
00:26:37.564 01:34:29 -- common/autotest_common.sh@819 -- #
'[' -z 747444 ']' 00:26:37.564 01:34:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:37.564 01:34:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:37.564 01:34:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:37.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:37.564 01:34:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:37.564 01:34:29 -- common/autotest_common.sh@10 -- # set +x 00:26:37.564 [2024-07-27 01:34:29.050343] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:37.564 [2024-07-27 01:34:29.050413] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747444 ] 00:26:37.564 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:37.564 Zero copy mechanism will not be used. 00:26:37.564 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.564 [2024-07-27 01:34:29.110727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.564 [2024-07-27 01:34:29.221858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.499 01:34:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:38.499 01:34:29 -- common/autotest_common.sh@852 -- # return 0 00:26:38.499 01:34:29 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:38.499 01:34:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:38.499 01:34:30 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:38.499 01:34:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:38.499 01:34:30 -- common/autotest_common.sh@10 -- # set +x 00:26:38.499 01:34:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:38.499 01:34:30 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:38.499 01:34:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:39.085 nvme0n1 00:26:39.085 01:34:30 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:39.085 01:34:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.085 01:34:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.085 01:34:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.085 01:34:30 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:39.085 01:34:30 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:39.085 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:39.086 Zero copy mechanism will not be used. 00:26:39.086 Running I/O for 2 seconds... 
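The trace above is the setup for the second digest-error run: bdevperf is started idle (-z) on core 1 against the UNIX socket /var/tmp/bperf.sock, NVMe error statistics are enabled with unlimited bdev retries, crc32c error injection is cleared while the controller is attached with data digest enabled (--ddgst), and injection is then switched to corrupt mode before the workload is kicked off over RPC. Condensed into a standalone sketch of that sequence (same socket, address, NQN, and flags as in the log; the full Jenkins workspace paths are shortened here for readability):

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  RPC="scripts/rpc.py -s /var/tmp/bperf.sock"
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC accel_error_inject_error -o crc32c -t disable          # no injection while attaching
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32    # corrupt crc32c from now on
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the digest calculation corrupted, every 128 KiB write in the two-second run below completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) while the target logs a matching data_crc32_calc_done digest error.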
00:26:39.086 [2024-07-27 01:34:30.789425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.086 [2024-07-27 01:34:30.789726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.086 [2024-07-27 01:34:30.789769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.086 [2024-07-27 01:34:30.806499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.086 [2024-07-27 01:34:30.806925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.086 [2024-07-27 01:34:30.806959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.086 [2024-07-27 01:34:30.824410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.086 [2024-07-27 01:34:30.824897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.086 [2024-07-27 01:34:30.824927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.086 [2024-07-27 01:34:30.841728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.086 [2024-07-27 01:34:30.842444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.086 [2024-07-27 01:34:30.842474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.346 [2024-07-27 01:34:30.860781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.346 [2024-07-27 01:34:30.861283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.346 [2024-07-27 01:34:30.861311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.346 [2024-07-27 01:34:30.879227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.346 [2024-07-27 01:34:30.879719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.346 [2024-07-27 01:34:30.879749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.346 [2024-07-27 01:34:30.899381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.346 [2024-07-27 01:34:30.899755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.346 [2024-07-27 01:34:30.899785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.346 [2024-07-27 01:34:30.916944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.346 [2024-07-27 01:34:30.917452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.346 [2024-07-27 01:34:30.917482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.346 [2024-07-27 01:34:30.936014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.346 [2024-07-27 01:34:30.936587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.346 [2024-07-27 01:34:30.936618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.346 [2024-07-27 01:34:30.954558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.346 [2024-07-27 01:34:30.955036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.346 [2024-07-27 01:34:30.955072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.346 [2024-07-27 01:34:30.973055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.346 [2024-07-27 01:34:30.973508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.346 [2024-07-27 01:34:30.973536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.346 [2024-07-27 01:34:30.992224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.346 [2024-07-27 01:34:30.992740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.346 [2024-07-27 01:34:30.992768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.346 [2024-07-27 01:34:31.011086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.346 [2024-07-27 01:34:31.011582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.346 [2024-07-27 01:34:31.011610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.346 [2024-07-27 01:34:31.028424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.346 [2024-07-27 01:34:31.028784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.346 [2024-07-27 01:34:31.028829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.346 [2024-07-27 01:34:31.046406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.346 [2024-07-27 01:34:31.047051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.346 [2024-07-27 01:34:31.047107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.346 [2024-07-27 01:34:31.061691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.347 [2024-07-27 01:34:31.062320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.347 [2024-07-27 01:34:31.062351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.347 [2024-07-27 01:34:31.076019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.347 [2024-07-27 01:34:31.076446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.347 [2024-07-27 01:34:31.076486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.347 [2024-07-27 01:34:31.092315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.347 [2024-07-27 01:34:31.092876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.347 [2024-07-27 01:34:31.092904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.606 [2024-07-27 01:34:31.109701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.606 [2024-07-27 01:34:31.110135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.606 [2024-07-27 01:34:31.110165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.606 [2024-07-27 01:34:31.125144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.606 [2024-07-27 01:34:31.125524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.606 [2024-07-27 01:34:31.125552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.606 [2024-07-27 01:34:31.141679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.606 [2024-07-27 01:34:31.142173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.606 [2024-07-27 01:34:31.142207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.606 [2024-07-27 01:34:31.160489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.606 [2024-07-27 01:34:31.160936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.606 [2024-07-27 01:34:31.160965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.606 [2024-07-27 01:34:31.178871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.606 [2024-07-27 01:34:31.179423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.606 [2024-07-27 01:34:31.179453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.606 [2024-07-27 01:34:31.197266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.606 [2024-07-27 01:34:31.197787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.606 [2024-07-27 01:34:31.197815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.606 [2024-07-27 01:34:31.215377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.606 [2024-07-27 01:34:31.215931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.606 [2024-07-27 01:34:31.215960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.607 [2024-07-27 01:34:31.232333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.607 [2024-07-27 01:34:31.232794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.607 [2024-07-27 01:34:31.232823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.607 [2024-07-27 01:34:31.250531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.607 [2024-07-27 01:34:31.250913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.607 [2024-07-27 01:34:31.250941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.607 [2024-07-27 01:34:31.267990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.607 [2024-07-27 01:34:31.268519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.607 
[2024-07-27 01:34:31.268548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.607 [2024-07-27 01:34:31.286305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.607 [2024-07-27 01:34:31.286885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.607 [2024-07-27 01:34:31.286913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.607 [2024-07-27 01:34:31.304398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.607 [2024-07-27 01:34:31.304774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.607 [2024-07-27 01:34:31.304803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.607 [2024-07-27 01:34:31.323094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.607 [2024-07-27 01:34:31.323557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.607 [2024-07-27 01:34:31.323585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.607 [2024-07-27 01:34:31.341194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.607 [2024-07-27 01:34:31.341750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.607 [2024-07-27 01:34:31.341779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.607 [2024-07-27 01:34:31.360568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.607 [2024-07-27 01:34:31.361007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.607 [2024-07-27 01:34:31.361036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.865 [2024-07-27 01:34:31.379327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.865 [2024-07-27 01:34:31.379848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.865 [2024-07-27 01:34:31.379876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.865 [2024-07-27 01:34:31.398149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.865 [2024-07-27 01:34:31.398683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.865 [2024-07-27 01:34:31.398711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.865 [2024-07-27 01:34:31.417138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.865 [2024-07-27 01:34:31.417578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.865 [2024-07-27 01:34:31.417607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.865 [2024-07-27 01:34:31.433706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.865 [2024-07-27 01:34:31.434153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.865 [2024-07-27 01:34:31.434181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.865 [2024-07-27 01:34:31.451706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.865 [2024-07-27 01:34:31.452259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.865 [2024-07-27 01:34:31.452292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.865 [2024-07-27 01:34:31.468676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.865 [2024-07-27 01:34:31.469222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.865 [2024-07-27 01:34:31.469251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.865 [2024-07-27 01:34:31.486497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.865 [2024-07-27 01:34:31.486866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.865 [2024-07-27 01:34:31.486897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.865 [2024-07-27 01:34:31.505086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.865 [2024-07-27 01:34:31.505561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.865 [2024-07-27 01:34:31.505598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.865 [2024-07-27 01:34:31.523434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.865 [2024-07-27 01:34:31.524084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.865 [2024-07-27 01:34:31.524127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.865 [2024-07-27 01:34:31.541847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.865 [2024-07-27 01:34:31.542345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.865 [2024-07-27 01:34:31.542375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.865 [2024-07-27 01:34:31.560831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.865 [2024-07-27 01:34:31.561358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.865 [2024-07-27 01:34:31.561387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.865 [2024-07-27 01:34:31.578967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.865 [2024-07-27 01:34:31.579367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.865 [2024-07-27 01:34:31.579396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.865 [2024-07-27 01:34:31.597459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.865 [2024-07-27 01:34:31.597933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.865 [2024-07-27 01:34:31.597962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.865 [2024-07-27 01:34:31.614909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:39.865 [2024-07-27 01:34:31.615369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.865 [2024-07-27 01:34:31.615399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.123 [2024-07-27 01:34:31.634116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.123 [2024-07-27 01:34:31.634559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.123 [2024-07-27 01:34:31.634588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.123 [2024-07-27 01:34:31.652842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.123 [2024-07-27 01:34:31.653382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.123 [2024-07-27 01:34:31.653412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.123 [2024-07-27 01:34:31.670243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.123 [2024-07-27 01:34:31.670761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.123 [2024-07-27 01:34:31.670793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.123 [2024-07-27 01:34:31.688851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.123 [2024-07-27 01:34:31.689410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.123 [2024-07-27 01:34:31.689439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.123 [2024-07-27 01:34:31.707470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.123 [2024-07-27 01:34:31.707967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.123 [2024-07-27 01:34:31.707995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.123 [2024-07-27 01:34:31.725261] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.123 [2024-07-27 01:34:31.725650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.123 [2024-07-27 01:34:31.725678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.123 [2024-07-27 01:34:31.742318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.123 [2024-07-27 01:34:31.742811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.123 [2024-07-27 01:34:31.742839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.123 [2024-07-27 01:34:31.758898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.123 [2024-07-27 01:34:31.759482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.123 [2024-07-27 01:34:31.759511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.123 [2024-07-27 01:34:31.776426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.124 
[2024-07-27 01:34:31.776821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.124 [2024-07-27 01:34:31.776865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.124 [2024-07-27 01:34:31.793808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.124 [2024-07-27 01:34:31.794415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.124 [2024-07-27 01:34:31.794449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.124 [2024-07-27 01:34:31.811841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.124 [2024-07-27 01:34:31.812373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.124 [2024-07-27 01:34:31.812401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.124 [2024-07-27 01:34:31.830342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.124 [2024-07-27 01:34:31.830707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.124 [2024-07-27 01:34:31.830740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.124 [2024-07-27 01:34:31.847382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.124 [2024-07-27 01:34:31.847905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.124 [2024-07-27 01:34:31.847933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.124 [2024-07-27 01:34:31.864711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.124 [2024-07-27 01:34:31.865336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.124 [2024-07-27 01:34:31.865377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.381 [2024-07-27 01:34:31.882716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.381 [2024-07-27 01:34:31.882983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.381 [2024-07-27 01:34:31.883013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.381 [2024-07-27 01:34:31.900697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.381 [2024-07-27 01:34:31.901254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.382 [2024-07-27 01:34:31.901288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.382 [2024-07-27 01:34:31.918381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.382 [2024-07-27 01:34:31.918824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.382 [2024-07-27 01:34:31.918857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.382 [2024-07-27 01:34:31.936667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.382 [2024-07-27 01:34:31.937290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.382 [2024-07-27 01:34:31.937320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.382 [2024-07-27 01:34:31.955831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.382 [2024-07-27 01:34:31.956333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.382 [2024-07-27 01:34:31.956363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.382 [2024-07-27 01:34:31.974945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.382 [2024-07-27 01:34:31.975509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.382 [2024-07-27 01:34:31.975545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.382 [2024-07-27 01:34:31.994295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.382 [2024-07-27 01:34:31.994781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.382 [2024-07-27 01:34:31.994810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.382 [2024-07-27 01:34:32.013549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.382 [2024-07-27 01:34:32.014057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.382 [2024-07-27 01:34:32.014093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.382 [2024-07-27 01:34:32.032499] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.382 [2024-07-27 01:34:32.032955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.382 [2024-07-27 01:34:32.032983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.382 [2024-07-27 01:34:32.051915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.382 [2024-07-27 01:34:32.052556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.382 [2024-07-27 01:34:32.052585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.382 [2024-07-27 01:34:32.069706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.382 [2024-07-27 01:34:32.070143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.382 [2024-07-27 01:34:32.070172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.382 [2024-07-27 01:34:32.087509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.382 [2024-07-27 01:34:32.087932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.382 [2024-07-27 01:34:32.087961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.382 [2024-07-27 01:34:32.106350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.382 [2024-07-27 01:34:32.106762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.382 [2024-07-27 01:34:32.106791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.382 [2024-07-27 01:34:32.123688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.382 [2024-07-27 01:34:32.124172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.382 [2024-07-27 01:34:32.124202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.642 [2024-07-27 01:34:32.140831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.141354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.141384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
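Each injected failure above shows up as a pair of records: the target-side digest check fails in tcp.c:2034 (data_crc32_calc_done), and the initiator then prints the offending WRITE command together with its completion, whose status is COMMAND TRANSIENT TRANSPORT ERROR (00/22). If a run like this has been captured to a file, the failure count can be cross-checked directly from the log — a hypothetical one-liner, not part of the test scripts:

  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' bperf_run.log   # bperf_run.log is a placeholder name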
00:26:40.642 [2024-07-27 01:34:32.159071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.159583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.159611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.642 [2024-07-27 01:34:32.178136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.178491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.178534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.642 [2024-07-27 01:34:32.195700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.196194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.196224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.642 [2024-07-27 01:34:32.213358] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.213707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.213736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.642 [2024-07-27 01:34:32.231296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.231765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.231794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.642 [2024-07-27 01:34:32.250079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.250619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.250648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.642 [2024-07-27 01:34:32.267231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.267862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.267891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.642 [2024-07-27 01:34:32.283988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.284500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.284529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.642 [2024-07-27 01:34:32.303290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.303961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.303990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.642 [2024-07-27 01:34:32.322216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.322642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.322670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.642 [2024-07-27 01:34:32.340483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.341043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.341083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.642 [2024-07-27 01:34:32.358319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.358756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.358785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.642 [2024-07-27 01:34:32.376806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.377352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.377382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.642 [2024-07-27 01:34:32.396475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.642 [2024-07-27 01:34:32.397203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.642 [2024-07-27 01:34:32.397232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.901 [2024-07-27 01:34:32.414525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.901 [2024-07-27 01:34:32.414927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.902 [2024-07-27 01:34:32.414960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.902 [2024-07-27 01:34:32.432885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.902 [2024-07-27 01:34:32.433451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.902 [2024-07-27 01:34:32.433496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.902 [2024-07-27 01:34:32.450999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.902 [2024-07-27 01:34:32.451402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.902 [2024-07-27 01:34:32.451450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.902 [2024-07-27 01:34:32.469784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.902 [2024-07-27 01:34:32.470268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.902 [2024-07-27 01:34:32.470300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.902 [2024-07-27 01:34:32.488074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.902 [2024-07-27 01:34:32.488614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.902 [2024-07-27 01:34:32.488661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.902 [2024-07-27 01:34:32.506329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.902 [2024-07-27 01:34:32.506776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.902 [2024-07-27 01:34:32.506819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.902 [2024-07-27 01:34:32.525344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.902 [2024-07-27 01:34:32.525712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.902 [2024-07-27 01:34:32.525742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.902 [2024-07-27 01:34:32.543180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.902 [2024-07-27 01:34:32.543667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.902 [2024-07-27 01:34:32.543696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.902 [2024-07-27 01:34:32.561821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.902 [2024-07-27 01:34:32.562369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.902 [2024-07-27 01:34:32.562398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.902 [2024-07-27 01:34:32.580892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.902 [2024-07-27 01:34:32.581455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.902 [2024-07-27 01:34:32.581483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.902 [2024-07-27 01:34:32.599290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.902 [2024-07-27 01:34:32.599751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.902 [2024-07-27 01:34:32.599794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.902 [2024-07-27 01:34:32.616583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.902 [2024-07-27 01:34:32.617103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.902 [2024-07-27 01:34:32.617133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.902 [2024-07-27 01:34:32.635766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.902 [2024-07-27 01:34:32.636142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.902 [2024-07-27 01:34:32.636172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.902 [2024-07-27 01:34:32.653643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:40.902 [2024-07-27 01:34:32.654261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.902 
[2024-07-27 01:34:32.654290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.161 [2024-07-27 01:34:32.671828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:41.161 [2024-07-27 01:34:32.672257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.161 [2024-07-27 01:34:32.672287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.161 [2024-07-27 01:34:32.690362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:41.161 [2024-07-27 01:34:32.690866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.161 [2024-07-27 01:34:32.690895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.161 [2024-07-27 01:34:32.708807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:41.161 [2024-07-27 01:34:32.709316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.161 [2024-07-27 01:34:32.709345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.161 [2024-07-27 01:34:32.726990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:41.161 [2024-07-27 01:34:32.727478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.161 [2024-07-27 01:34:32.727508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.161 [2024-07-27 01:34:32.745315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:41.161 [2024-07-27 01:34:32.745732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.161 [2024-07-27 01:34:32.745761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.161 [2024-07-27 01:34:32.762049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:41.161 [2024-07-27 01:34:32.762715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.161 [2024-07-27 01:34:32.762748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.161 [2024-07-27 01:34:32.779652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x148a210) with pdu=0x2000190fef90 00:26:41.161 [2024-07-27 01:34:32.780213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.161 [2024-07-27 01:34:32.780242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.161 00:26:41.161 Latency(us) 00:26:41.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.161 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:41.161 nvme0n1 : 2.01 1716.17 214.52 0.00 0.00 9294.83 4053.52 19515.16 00:26:41.161 =================================================================================================================== 00:26:41.161 Total : 1716.17 214.52 0.00 0.00 9294.83 4053.52 19515.16 00:26:41.161 0 00:26:41.161 01:34:32 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:41.161 01:34:32 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:41.161 01:34:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:41.161 01:34:32 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:41.161 | .driver_specific 00:26:41.161 | .nvme_error 00:26:41.161 | .status_code 00:26:41.161 | .command_transient_transport_error' 00:26:41.422 01:34:33 -- host/digest.sh@71 -- # (( 111 > 0 )) 00:26:41.422 01:34:33 -- host/digest.sh@73 -- # killprocess 747444 00:26:41.422 01:34:33 -- common/autotest_common.sh@926 -- # '[' -z 747444 ']' 00:26:41.422 01:34:33 -- common/autotest_common.sh@930 -- # kill -0 747444 00:26:41.422 01:34:33 -- common/autotest_common.sh@931 -- # uname 00:26:41.422 01:34:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:41.422 01:34:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 747444 00:26:41.422 01:34:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:41.422 01:34:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:41.422 01:34:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 747444' 00:26:41.422 killing process with pid 747444 00:26:41.422 01:34:33 -- common/autotest_common.sh@945 -- # kill 747444 00:26:41.422 Received shutdown signal, test time was about 2.000000 seconds 00:26:41.422 00:26:41.422 Latency(us) 00:26:41.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.422 =================================================================================================================== 00:26:41.422 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:41.422 01:34:33 -- common/autotest_common.sh@950 -- # wait 747444 00:26:41.681 01:34:33 -- host/digest.sh@115 -- # killprocess 745759 00:26:41.681 01:34:33 -- common/autotest_common.sh@926 -- # '[' -z 745759 ']' 00:26:41.681 01:34:33 -- common/autotest_common.sh@930 -- # kill -0 745759 00:26:41.681 01:34:33 -- common/autotest_common.sh@931 -- # uname 00:26:41.681 01:34:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:41.681 01:34:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 745759 00:26:41.681 01:34:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:41.681 01:34:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:41.681 01:34:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 745759' 00:26:41.681 killing process with pid 745759 00:26:41.681 01:34:33 -- common/autotest_common.sh@945 -- # kill 745759 00:26:41.681 01:34:33 -- common/autotest_common.sh@950 -- # wait 745759 00:26:41.939 
00:26:41.939 real 0m18.314s 00:26:41.939 user 0m35.681s 00:26:41.939 sys 0m4.401s 00:26:41.939 01:34:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:41.939 01:34:33 -- common/autotest_common.sh@10 -- # set +x 00:26:41.939 ************************************ 00:26:41.939 END TEST nvmf_digest_error 00:26:41.939 ************************************ 00:26:41.939 01:34:33 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:26:41.939 01:34:33 -- host/digest.sh@139 -- # nvmftestfini 00:26:41.939 01:34:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:41.939 01:34:33 -- nvmf/common.sh@116 -- # sync 00:26:41.939 01:34:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:41.939 01:34:33 -- nvmf/common.sh@119 -- # set +e 00:26:41.939 01:34:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:41.939 01:34:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:41.939 rmmod nvme_tcp 00:26:41.939 rmmod nvme_fabrics 00:26:41.939 rmmod nvme_keyring 00:26:42.198 01:34:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:42.198 01:34:33 -- nvmf/common.sh@123 -- # set -e 00:26:42.198 01:34:33 -- nvmf/common.sh@124 -- # return 0 00:26:42.198 01:34:33 -- nvmf/common.sh@477 -- # '[' -n 745759 ']' 00:26:42.198 01:34:33 -- nvmf/common.sh@478 -- # killprocess 745759 00:26:42.198 01:34:33 -- common/autotest_common.sh@926 -- # '[' -z 745759 ']' 00:26:42.198 01:34:33 -- common/autotest_common.sh@930 -- # kill -0 745759 00:26:42.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (745759) - No such process 00:26:42.198 01:34:33 -- common/autotest_common.sh@953 -- # echo 'Process with pid 745759 is not found' 00:26:42.198 Process with pid 745759 is not found 00:26:42.198 01:34:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:42.198 01:34:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:42.198 01:34:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:42.198 01:34:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:42.198 01:34:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:42.198 01:34:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.198 01:34:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:42.198 01:34:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.102 01:34:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:44.102 00:26:44.102 real 0m38.847s 00:26:44.102 user 1m8.711s 00:26:44.102 sys 0m9.992s 00:26:44.102 01:34:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:44.102 01:34:35 -- common/autotest_common.sh@10 -- # set +x 00:26:44.102 ************************************ 00:26:44.102 END TEST nvmf_digest 00:26:44.102 ************************************ 00:26:44.102 01:34:35 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:26:44.102 01:34:35 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:26:44.102 01:34:35 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:26:44.102 01:34:35 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:44.102 01:34:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:44.102 01:34:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:44.102 01:34:35 -- common/autotest_common.sh@10 -- # set +x 00:26:44.102 ************************************ 00:26:44.102 START TEST nvmf_bdevperf 00:26:44.102 ************************************ 00:26:44.102 01:34:35 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:44.102 * Looking for test storage... 00:26:44.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.102 01:34:35 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.102 01:34:35 -- nvmf/common.sh@7 -- # uname -s 00:26:44.102 01:34:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.102 01:34:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.102 01:34:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.102 01:34:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.102 01:34:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.102 01:34:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.102 01:34:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.102 01:34:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.102 01:34:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.102 01:34:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.103 01:34:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:44.103 01:34:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:44.103 01:34:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.103 01:34:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.103 01:34:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.103 01:34:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.103 01:34:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.103 01:34:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.103 01:34:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.103 01:34:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.103 01:34:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.103 01:34:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.103 01:34:35 -- paths/export.sh@5 -- # export PATH 00:26:44.103 01:34:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.103 01:34:35 -- nvmf/common.sh@46 -- # : 0 00:26:44.103 01:34:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:44.103 01:34:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:44.103 01:34:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:44.103 01:34:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.103 01:34:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.103 01:34:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:44.103 01:34:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:44.103 01:34:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:44.103 01:34:35 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:44.103 01:34:35 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:44.103 01:34:35 -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:44.103 01:34:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:44.103 01:34:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.103 01:34:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:44.103 01:34:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:44.103 01:34:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:44.103 01:34:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.103 01:34:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:44.103 01:34:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.103 01:34:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:44.103 01:34:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:44.103 01:34:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:44.103 01:34:35 -- common/autotest_common.sh@10 -- # set +x 00:26:46.639 01:34:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:46.639 01:34:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:46.639 01:34:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:46.639 01:34:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:46.639 01:34:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:46.639 01:34:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:46.639 01:34:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:46.639 01:34:37 -- nvmf/common.sh@294 -- # net_devs=() 00:26:46.639 01:34:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:46.639 01:34:37 -- nvmf/common.sh@295 
-- # e810=() 00:26:46.639 01:34:37 -- nvmf/common.sh@295 -- # local -ga e810 00:26:46.639 01:34:37 -- nvmf/common.sh@296 -- # x722=() 00:26:46.639 01:34:37 -- nvmf/common.sh@296 -- # local -ga x722 00:26:46.639 01:34:37 -- nvmf/common.sh@297 -- # mlx=() 00:26:46.639 01:34:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:46.639 01:34:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.639 01:34:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.639 01:34:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.639 01:34:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.639 01:34:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.639 01:34:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.639 01:34:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.639 01:34:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.639 01:34:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.639 01:34:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.639 01:34:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.639 01:34:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:46.639 01:34:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:46.639 01:34:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:46.639 01:34:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:46.639 01:34:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:46.639 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:46.639 01:34:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:46.639 01:34:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:46.639 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:46.639 01:34:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:46.639 01:34:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:46.639 01:34:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.639 01:34:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:46.639 01:34:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.639 01:34:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:46.639 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:26:46.639 01:34:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.639 01:34:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:46.639 01:34:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.639 01:34:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:46.639 01:34:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.639 01:34:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:46.639 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:46.639 01:34:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.639 01:34:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:46.639 01:34:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:46.639 01:34:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:46.639 01:34:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:46.639 01:34:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.639 01:34:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:46.639 01:34:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:46.639 01:34:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:46.639 01:34:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:46.639 01:34:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:46.639 01:34:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:46.639 01:34:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:46.639 01:34:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:46.639 01:34:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:46.639 01:34:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:46.639 01:34:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:46.639 01:34:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:46.639 01:34:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:46.639 01:34:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:46.639 01:34:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:46.639 01:34:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:46.639 01:34:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:46.639 01:34:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:46.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:46.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:26:46.639 00:26:46.639 --- 10.0.0.2 ping statistics --- 00:26:46.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.639 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:26:46.639 01:34:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:46.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:46.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:26:46.639 00:26:46.639 --- 10.0.0.1 ping statistics --- 00:26:46.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.639 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:26:46.639 01:34:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:46.639 01:34:37 -- nvmf/common.sh@410 -- # return 0 00:26:46.639 01:34:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:46.639 01:34:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:46.639 01:34:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:46.639 01:34:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:46.639 01:34:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:46.639 01:34:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:46.639 01:34:37 -- host/bdevperf.sh@25 -- # tgt_init 00:26:46.640 01:34:37 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:46.640 01:34:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:46.640 01:34:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:46.640 01:34:37 -- common/autotest_common.sh@10 -- # set +x 00:26:46.640 01:34:37 -- nvmf/common.sh@469 -- # nvmfpid=749954 00:26:46.640 01:34:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:46.640 01:34:37 -- nvmf/common.sh@470 -- # waitforlisten 749954 00:26:46.640 01:34:37 -- common/autotest_common.sh@819 -- # '[' -z 749954 ']' 00:26:46.640 01:34:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.640 01:34:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:46.640 01:34:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.640 01:34:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:46.640 01:34:37 -- common/autotest_common.sh@10 -- # set +x 00:26:46.640 [2024-07-27 01:34:38.028639] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:46.640 [2024-07-27 01:34:38.028719] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.640 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.640 [2024-07-27 01:34:38.089054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:46.640 [2024-07-27 01:34:38.203567] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:46.640 [2024-07-27 01:34:38.203740] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.640 [2024-07-27 01:34:38.203756] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.640 [2024-07-27 01:34:38.203767] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:46.640 [2024-07-27 01:34:38.203994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:46.640 [2024-07-27 01:34:38.204054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:46.640 [2024-07-27 01:34:38.204057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.574 01:34:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:47.574 01:34:39 -- common/autotest_common.sh@852 -- # return 0 00:26:47.574 01:34:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:47.574 01:34:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:47.574 01:34:39 -- common/autotest_common.sh@10 -- # set +x 00:26:47.574 01:34:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.574 01:34:39 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:47.574 01:34:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:47.574 01:34:39 -- common/autotest_common.sh@10 -- # set +x 00:26:47.574 [2024-07-27 01:34:39.026372] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.574 01:34:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:47.574 01:34:39 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:47.574 01:34:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:47.574 01:34:39 -- common/autotest_common.sh@10 -- # set +x 00:26:47.574 Malloc0 00:26:47.574 01:34:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:47.574 01:34:39 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:47.574 01:34:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:47.574 01:34:39 -- common/autotest_common.sh@10 -- # set +x 00:26:47.574 01:34:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:47.574 01:34:39 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:47.574 01:34:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:47.574 01:34:39 -- common/autotest_common.sh@10 -- # set +x 00:26:47.574 01:34:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:47.574 01:34:39 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.574 01:34:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:47.574 01:34:39 -- common/autotest_common.sh@10 -- # set +x 00:26:47.574 [2024-07-27 01:34:39.084699] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.574 01:34:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:47.574 01:34:39 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:47.574 01:34:39 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:47.574 01:34:39 -- nvmf/common.sh@520 -- # config=() 00:26:47.574 01:34:39 -- nvmf/common.sh@520 -- # local subsystem config 00:26:47.574 01:34:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:47.574 01:34:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:47.574 { 00:26:47.574 "params": { 00:26:47.574 "name": "Nvme$subsystem", 00:26:47.574 "trtype": "$TEST_TRANSPORT", 00:26:47.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.574 "adrfam": "ipv4", 00:26:47.574 "trsvcid": "$NVMF_PORT", 00:26:47.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.574 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.574 "hdgst": ${hdgst:-false}, 00:26:47.574 "ddgst": ${ddgst:-false} 00:26:47.574 }, 00:26:47.574 "method": "bdev_nvme_attach_controller" 00:26:47.574 } 00:26:47.574 EOF 00:26:47.574 )") 00:26:47.574 01:34:39 -- nvmf/common.sh@542 -- # cat 00:26:47.574 01:34:39 -- nvmf/common.sh@544 -- # jq . 00:26:47.574 01:34:39 -- nvmf/common.sh@545 -- # IFS=, 00:26:47.574 01:34:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:47.574 "params": { 00:26:47.574 "name": "Nvme1", 00:26:47.574 "trtype": "tcp", 00:26:47.574 "traddr": "10.0.0.2", 00:26:47.574 "adrfam": "ipv4", 00:26:47.574 "trsvcid": "4420", 00:26:47.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:47.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:47.574 "hdgst": false, 00:26:47.574 "ddgst": false 00:26:47.574 }, 00:26:47.574 "method": "bdev_nvme_attach_controller" 00:26:47.574 }' 00:26:47.574 [2024-07-27 01:34:39.123733] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:47.574 [2024-07-27 01:34:39.123808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750114 ] 00:26:47.574 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.574 [2024-07-27 01:34:39.182862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.574 [2024-07-27 01:34:39.289252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.834 Running I/O for 1 seconds... 00:26:48.771 00:26:48.771 Latency(us) 00:26:48.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.771 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:48.771 Verification LBA range: start 0x0 length 0x4000 00:26:48.771 Nvme1n1 : 1.01 12901.77 50.40 0.00 0.00 9877.67 1201.49 15728.64 00:26:48.771 =================================================================================================================== 00:26:48.771 Total : 12901.77 50.40 0.00 0.00 9877.67 1201.49 15728.64 00:26:49.029 01:34:40 -- host/bdevperf.sh@30 -- # bdevperfpid=750261 00:26:49.029 01:34:40 -- host/bdevperf.sh@32 -- # sleep 3 00:26:49.029 01:34:40 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:49.029 01:34:40 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:49.029 01:34:40 -- nvmf/common.sh@520 -- # config=() 00:26:49.029 01:34:40 -- nvmf/common.sh@520 -- # local subsystem config 00:26:49.029 01:34:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.029 01:34:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.029 { 00:26:49.029 "params": { 00:26:49.029 "name": "Nvme$subsystem", 00:26:49.029 "trtype": "$TEST_TRANSPORT", 00:26:49.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.029 "adrfam": "ipv4", 00:26:49.029 "trsvcid": "$NVMF_PORT", 00:26:49.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.029 "hdgst": ${hdgst:-false}, 00:26:49.029 "ddgst": ${ddgst:-false} 00:26:49.029 }, 00:26:49.029 "method": "bdev_nvme_attach_controller" 00:26:49.029 } 00:26:49.029 EOF 00:26:49.029 )") 00:26:49.029 01:34:40 -- nvmf/common.sh@542 -- # cat 00:26:49.029 01:34:40 -- nvmf/common.sh@544 -- # jq . 
00:26:49.029 01:34:40 -- nvmf/common.sh@545 -- # IFS=, 00:26:49.029 01:34:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:49.029 "params": { 00:26:49.029 "name": "Nvme1", 00:26:49.029 "trtype": "tcp", 00:26:49.029 "traddr": "10.0.0.2", 00:26:49.029 "adrfam": "ipv4", 00:26:49.029 "trsvcid": "4420", 00:26:49.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:49.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:49.029 "hdgst": false, 00:26:49.029 "ddgst": false 00:26:49.029 }, 00:26:49.029 "method": "bdev_nvme_attach_controller" 00:26:49.029 }' 00:26:49.287 [2024-07-27 01:34:40.795937] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:49.287 [2024-07-27 01:34:40.796015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750261 ] 00:26:49.287 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.287 [2024-07-27 01:34:40.855471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.287 [2024-07-27 01:34:40.961250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.547 Running I/O for 15 seconds... 00:26:52.093 01:34:43 -- host/bdevperf.sh@33 -- # kill -9 749954 00:26:52.093 01:34:43 -- host/bdevperf.sh@35 -- # sleep 3 00:26:52.093 [2024-07-27 01:34:43.771065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.093 [2024-07-27 01:34:43.771153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.093 [2024-07-27 01:34:43.771187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.093 [2024-07-27 01:34:43.771204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.093 [2024-07-27 01:34:43.771223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.093 [2024-07-27 01:34:43.771238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.093 [2024-07-27 01:34:43.771255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.093 [2024-07-27 01:34:43.771270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.093 [2024-07-27 01:34:43.771286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.093 [2024-07-27 01:34:43.771302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.093 [2024-07-27 01:34:43.771319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.093 [2024-07-27 01:34:43.771350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.093 [2024-07-27 01:34:43.771368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.093 [2024-07-27 01:34:43.771385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:36 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.771976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.771991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5360 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 
01:34:43.772405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.094 [2024-07-27 01:34:43.772688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.094 [2024-07-27 01:34:43.772703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.772720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.772736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.772753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.772768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.772785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.772801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.772818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.772833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.772850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.772866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.772883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.095 [2024-07-27 01:34:43.772903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.772920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.095 [2024-07-27 01:34:43.772936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.772952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.095 [2024-07-27 01:34:43.772968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.772985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.095 [2024-07-27 01:34:43.773001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.095 [2024-07-27 01:34:43.773033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.095 [2024-07-27 01:34:43.773330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:52.095 [2024-07-27 01:34:43.773427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.095 [2024-07-27 01:34:43.773443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.095 [2024-07-27 01:34:43.773507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.095 [2024-07-27 01:34:43.773701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.095 [2024-07-27 01:34:43.773737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773754] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.095 [2024-07-27 01:34:43.773834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.095 [2024-07-27 01:34:43.773866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.095 [2024-07-27 01:34:43.773899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.095 [2024-07-27 01:34:43.773964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.773981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.095 [2024-07-27 01:34:43.773996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.095 [2024-07-27 01:34:43.774012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774087] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.096 [2024-07-27 01:34:43.774147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.096 [2024-07-27 01:34:43.774183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.096 [2024-07-27 01:34:43.774213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.096 [2024-07-27 01:34:43.774241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.096 [2024-07-27 01:34:43.774378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5224 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.096 [2024-07-27 01:34:43.774709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.096 
[2024-07-27 01:34:43.774774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.096 [2024-07-27 01:34:43.774871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.096 [2024-07-27 01:34:43.774904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.096 [2024-07-27 01:34:43.774936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.774968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.774985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.775004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.775022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.096 [2024-07-27 01:34:43.775038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.775055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.775078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.775111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.775126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.775141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.096 [2024-07-27 01:34:43.775155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.775171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.775185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.775200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.775214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.775229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.775243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.775259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.775273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.775288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.096 [2024-07-27 01:34:43.775302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.096 [2024-07-27 01:34:43.775317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.097 [2024-07-27 01:34:43.775339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.097 [2024-07-27 01:34:43.775373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.097 [2024-07-27 01:34:43.775389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.097 [2024-07-27 01:34:43.775406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.097 [2024-07-27 01:34:43.775421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.097 [2024-07-27 01:34:43.775441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bf3a0 is same with the state(5) to be set 00:26:52.097 [2024-07-27 01:34:43.775462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:26:52.097 [2024-07-27 01:34:43.775474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:52.097 [2024-07-27 01:34:43.775487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5520 len:8 PRP1 0x0 PRP2 0x0 00:26:52.097 [2024-07-27 01:34:43.775502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.097 [2024-07-27 01:34:43.775573] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12bf3a0 was disconnected and freed. reset controller. 00:26:52.097 [2024-07-27 01:34:43.778162] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.097 [2024-07-27 01:34:43.778243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.097 [2024-07-27 01:34:43.778936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.097 [2024-07-27 01:34:43.779193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.097 [2024-07-27 01:34:43.779219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.097 [2024-07-27 01:34:43.779235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.097 [2024-07-27 01:34:43.779407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.097 [2024-07-27 01:34:43.779577] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.097 [2024-07-27 01:34:43.779601] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.097 [2024-07-27 01:34:43.779620] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.097 [2024-07-27 01:34:43.781863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.097 [2024-07-27 01:34:43.790775] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.097 [2024-07-27 01:34:43.791177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.097 [2024-07-27 01:34:43.791360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.097 [2024-07-27 01:34:43.791391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.097 [2024-07-27 01:34:43.791409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.097 [2024-07-27 01:34:43.791595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.097 [2024-07-27 01:34:43.791765] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.097 [2024-07-27 01:34:43.791790] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.097 [2024-07-27 01:34:43.791806] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:52.097 [2024-07-27 01:34:43.794090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.097 [2024-07-27 01:34:43.803287] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.097 [2024-07-27 01:34:43.803659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.097 [2024-07-27 01:34:43.803915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.097 [2024-07-27 01:34:43.803956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.097 [2024-07-27 01:34:43.803978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.097 [2024-07-27 01:34:43.804234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.097 [2024-07-27 01:34:43.804368] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.097 [2024-07-27 01:34:43.804394] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.097 [2024-07-27 01:34:43.804410] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.097 [2024-07-27 01:34:43.806632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.097 [2024-07-27 01:34:43.815801] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.097 [2024-07-27 01:34:43.816170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.097 [2024-07-27 01:34:43.816351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.097 [2024-07-27 01:34:43.816379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.097 [2024-07-27 01:34:43.816397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.097 [2024-07-27 01:34:43.816598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.097 [2024-07-27 01:34:43.816785] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.097 [2024-07-27 01:34:43.816810] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.097 [2024-07-27 01:34:43.816827] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.097 [2024-07-27 01:34:43.819221] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.097 [2024-07-27 01:34:43.828201] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.097 [2024-07-27 01:34:43.828565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.097 [2024-07-27 01:34:43.828834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.097 [2024-07-27 01:34:43.828876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.097 [2024-07-27 01:34:43.828892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.097 [2024-07-27 01:34:43.829096] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.097 [2024-07-27 01:34:43.829284] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.097 [2024-07-27 01:34:43.829310] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.097 [2024-07-27 01:34:43.829326] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.097 [2024-07-27 01:34:43.831748] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.097 [2024-07-27 01:34:43.841070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.097 [2024-07-27 01:34:43.841468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.097 [2024-07-27 01:34:43.841659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.097 [2024-07-27 01:34:43.841687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.097 [2024-07-27 01:34:43.841706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.097 [2024-07-27 01:34:43.841877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.097 [2024-07-27 01:34:43.842047] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.097 [2024-07-27 01:34:43.842088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.097 [2024-07-27 01:34:43.842106] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.097 [2024-07-27 01:34:43.844422] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.357 [2024-07-27 01:34:43.853729] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.357 [2024-07-27 01:34:43.854080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.854284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.854314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.357 [2024-07-27 01:34:43.854333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.357 [2024-07-27 01:34:43.854500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.357 [2024-07-27 01:34:43.854669] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.357 [2024-07-27 01:34:43.854695] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.357 [2024-07-27 01:34:43.854712] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.357 [2024-07-27 01:34:43.856903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.357 [2024-07-27 01:34:43.866201] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.357 [2024-07-27 01:34:43.866586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.866784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.866812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.357 [2024-07-27 01:34:43.866830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.357 [2024-07-27 01:34:43.866978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.357 [2024-07-27 01:34:43.867178] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.357 [2024-07-27 01:34:43.867218] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.357 [2024-07-27 01:34:43.867236] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.357 [2024-07-27 01:34:43.869443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.357 [2024-07-27 01:34:43.879008] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.357 [2024-07-27 01:34:43.879380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.879598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.879625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.357 [2024-07-27 01:34:43.879644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.357 [2024-07-27 01:34:43.879829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.357 [2024-07-27 01:34:43.880022] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.357 [2024-07-27 01:34:43.880047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.357 [2024-07-27 01:34:43.880078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.357 [2024-07-27 01:34:43.882304] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.357 [2024-07-27 01:34:43.891634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.357 [2024-07-27 01:34:43.891949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.892183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.892214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.357 [2024-07-27 01:34:43.892232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.357 [2024-07-27 01:34:43.892399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.357 [2024-07-27 01:34:43.892604] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.357 [2024-07-27 01:34:43.892629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.357 [2024-07-27 01:34:43.892645] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.357 [2024-07-27 01:34:43.895012] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.357 [2024-07-27 01:34:43.904176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.357 [2024-07-27 01:34:43.904567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.904800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.904847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.357 [2024-07-27 01:34:43.904866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.357 [2024-07-27 01:34:43.904997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.357 [2024-07-27 01:34:43.905199] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.357 [2024-07-27 01:34:43.905226] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.357 [2024-07-27 01:34:43.905243] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.357 [2024-07-27 01:34:43.907647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.357 [2024-07-27 01:34:43.916586] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.357 [2024-07-27 01:34:43.916946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.917207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.917239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.357 [2024-07-27 01:34:43.917258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.357 [2024-07-27 01:34:43.917443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.357 [2024-07-27 01:34:43.917613] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.357 [2024-07-27 01:34:43.917644] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.357 [2024-07-27 01:34:43.917662] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.357 [2024-07-27 01:34:43.919993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.357 [2024-07-27 01:34:43.929215] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.357 [2024-07-27 01:34:43.929596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.929787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.929815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.357 [2024-07-27 01:34:43.929833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.357 [2024-07-27 01:34:43.929981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.357 [2024-07-27 01:34:43.930164] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.357 [2024-07-27 01:34:43.930190] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.357 [2024-07-27 01:34:43.930207] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.357 [2024-07-27 01:34:43.932503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.357 [2024-07-27 01:34:43.941903] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.357 [2024-07-27 01:34:43.942291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.942500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.357 [2024-07-27 01:34:43.942527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.357 [2024-07-27 01:34:43.942546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.357 [2024-07-27 01:34:43.942748] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.357 [2024-07-27 01:34:43.942955] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.358 [2024-07-27 01:34:43.942981] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.358 [2024-07-27 01:34:43.942997] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.358 [2024-07-27 01:34:43.945412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.358 [2024-07-27 01:34:43.954612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.358 [2024-07-27 01:34:43.955034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:43.955274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:43.955302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.358 [2024-07-27 01:34:43.955318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.358 [2024-07-27 01:34:43.955551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.358 [2024-07-27 01:34:43.955772] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.358 [2024-07-27 01:34:43.955797] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.358 [2024-07-27 01:34:43.955819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.358 [2024-07-27 01:34:43.958181] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.358 [2024-07-27 01:34:43.967237] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.358 [2024-07-27 01:34:43.967623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:43.967847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:43.967898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.358 [2024-07-27 01:34:43.967915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.358 [2024-07-27 01:34:43.968149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.358 [2024-07-27 01:34:43.968303] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.358 [2024-07-27 01:34:43.968328] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.358 [2024-07-27 01:34:43.968345] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.358 [2024-07-27 01:34:43.970624] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.358 [2024-07-27 01:34:43.979908] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.358 [2024-07-27 01:34:43.980322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:43.980543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:43.980570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.358 [2024-07-27 01:34:43.980588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.358 [2024-07-27 01:34:43.980808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.358 [2024-07-27 01:34:43.981033] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.358 [2024-07-27 01:34:43.981069] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.358 [2024-07-27 01:34:43.981089] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.358 [2024-07-27 01:34:43.983384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.358 [2024-07-27 01:34:43.992698] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.358 [2024-07-27 01:34:43.993050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:43.993277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:43.993306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.358 [2024-07-27 01:34:43.993324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.358 [2024-07-27 01:34:43.993491] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.358 [2024-07-27 01:34:43.993695] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.358 [2024-07-27 01:34:43.993720] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.358 [2024-07-27 01:34:43.993738] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.358 [2024-07-27 01:34:43.996212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.358 [2024-07-27 01:34:44.005208] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.358 [2024-07-27 01:34:44.005633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:44.005796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:44.005826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.358 [2024-07-27 01:34:44.005844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.358 [2024-07-27 01:34:44.005956] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.358 [2024-07-27 01:34:44.006175] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.358 [2024-07-27 01:34:44.006200] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.358 [2024-07-27 01:34:44.006217] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.358 [2024-07-27 01:34:44.008473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.358 [2024-07-27 01:34:44.017803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.358 [2024-07-27 01:34:44.018151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:44.018380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:44.018437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.358 [2024-07-27 01:34:44.018456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.358 [2024-07-27 01:34:44.018621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.358 [2024-07-27 01:34:44.018809] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.358 [2024-07-27 01:34:44.018833] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.358 [2024-07-27 01:34:44.018850] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.358 [2024-07-27 01:34:44.021205] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.358 [2024-07-27 01:34:44.030252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.358 [2024-07-27 01:34:44.030646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:44.031032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:44.031104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.358 [2024-07-27 01:34:44.031123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.358 [2024-07-27 01:34:44.031288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.358 [2024-07-27 01:34:44.031476] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.358 [2024-07-27 01:34:44.031500] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.358 [2024-07-27 01:34:44.031517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.358 [2024-07-27 01:34:44.033913] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.358 [2024-07-27 01:34:44.042801] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.358 [2024-07-27 01:34:44.043184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:44.043403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:44.043449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.358 [2024-07-27 01:34:44.043467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.358 [2024-07-27 01:34:44.043615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.358 [2024-07-27 01:34:44.043784] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.358 [2024-07-27 01:34:44.043809] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.358 [2024-07-27 01:34:44.043826] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.358 [2024-07-27 01:34:44.046162] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.358 [2024-07-27 01:34:44.055345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.358 [2024-07-27 01:34:44.055741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:44.056001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.358 [2024-07-27 01:34:44.056029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.358 [2024-07-27 01:34:44.056048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.358 [2024-07-27 01:34:44.056207] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.358 [2024-07-27 01:34:44.056341] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.358 [2024-07-27 01:34:44.056366] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.358 [2024-07-27 01:34:44.056383] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.359 [2024-07-27 01:34:44.058476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.359 [2024-07-27 01:34:44.068003] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.359 [2024-07-27 01:34:44.068367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.359 [2024-07-27 01:34:44.068663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.359 [2024-07-27 01:34:44.068689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.359 [2024-07-27 01:34:44.068719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.359 [2024-07-27 01:34:44.068859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.359 [2024-07-27 01:34:44.069011] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.359 [2024-07-27 01:34:44.069035] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.359 [2024-07-27 01:34:44.069052] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.359 [2024-07-27 01:34:44.071445] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.359 [2024-07-27 01:34:44.080611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.359 [2024-07-27 01:34:44.080979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.359 [2024-07-27 01:34:44.081214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.359 [2024-07-27 01:34:44.081242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.359 [2024-07-27 01:34:44.081259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.359 [2024-07-27 01:34:44.081462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.359 [2024-07-27 01:34:44.081614] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.359 [2024-07-27 01:34:44.081639] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.359 [2024-07-27 01:34:44.081655] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.359 [2024-07-27 01:34:44.084021] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.359 [2024-07-27 01:34:44.093206] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.359 [2024-07-27 01:34:44.093606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.359 [2024-07-27 01:34:44.093793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.359 [2024-07-27 01:34:44.093822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.359 [2024-07-27 01:34:44.093841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.359 [2024-07-27 01:34:44.093988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.359 [2024-07-27 01:34:44.094168] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.359 [2024-07-27 01:34:44.094194] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.359 [2024-07-27 01:34:44.094210] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.359 [2024-07-27 01:34:44.096519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.359 [2024-07-27 01:34:44.105719] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.359 [2024-07-27 01:34:44.106118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.359 [2024-07-27 01:34:44.106345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.359 [2024-07-27 01:34:44.106395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.359 [2024-07-27 01:34:44.106414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.359 [2024-07-27 01:34:44.106579] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.359 [2024-07-27 01:34:44.106713] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.359 [2024-07-27 01:34:44.106737] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.359 [2024-07-27 01:34:44.106753] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.359 [2024-07-27 01:34:44.109045] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.618 [2024-07-27 01:34:44.118403] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.618 [2024-07-27 01:34:44.118752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-27 01:34:44.118951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-27 01:34:44.118986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.618 [2024-07-27 01:34:44.119006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.618 [2024-07-27 01:34:44.119201] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.618 [2024-07-27 01:34:44.119353] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.618 [2024-07-27 01:34:44.119378] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.618 [2024-07-27 01:34:44.119395] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.618 [2024-07-27 01:34:44.121776] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.618 [2024-07-27 01:34:44.130873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.618 [2024-07-27 01:34:44.131313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-27 01:34:44.131620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-27 01:34:44.131666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.618 [2024-07-27 01:34:44.131684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.618 [2024-07-27 01:34:44.131886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.618 [2024-07-27 01:34:44.132020] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.618 [2024-07-27 01:34:44.132044] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.618 [2024-07-27 01:34:44.132071] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.618 [2024-07-27 01:34:44.134417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.618 [2024-07-27 01:34:44.143636] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.618 [2024-07-27 01:34:44.144130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-27 01:34:44.144427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-27 01:34:44.144452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.618 [2024-07-27 01:34:44.144468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.618 [2024-07-27 01:34:44.144638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.618 [2024-07-27 01:34:44.144840] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.618 [2024-07-27 01:34:44.144865] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.618 [2024-07-27 01:34:44.144882] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.618 [2024-07-27 01:34:44.147379] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.618 [2024-07-27 01:34:44.156144] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.618 [2024-07-27 01:34:44.156490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-27 01:34:44.156667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-27 01:34:44.156693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.618 [2024-07-27 01:34:44.156718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.618 [2024-07-27 01:34:44.156885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.618 [2024-07-27 01:34:44.157102] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.618 [2024-07-27 01:34:44.157128] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.618 [2024-07-27 01:34:44.157145] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.618 [2024-07-27 01:34:44.159474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.618 [2024-07-27 01:34:44.168745] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.618 [2024-07-27 01:34:44.169069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-27 01:34:44.169287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-27 01:34:44.169311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.618 [2024-07-27 01:34:44.169327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.618 [2024-07-27 01:34:44.169524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.618 [2024-07-27 01:34:44.169720] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.618 [2024-07-27 01:34:44.169745] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.618 [2024-07-27 01:34:44.169762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.618 [2024-07-27 01:34:44.172098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.618 [2024-07-27 01:34:44.181399] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.618 [2024-07-27 01:34:44.181779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-27 01:34:44.181980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.618 [2024-07-27 01:34:44.182011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.618 [2024-07-27 01:34:44.182030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.618 [2024-07-27 01:34:44.182188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.618 [2024-07-27 01:34:44.182341] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.618 [2024-07-27 01:34:44.182365] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.618 [2024-07-27 01:34:44.182381] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.618 [2024-07-27 01:34:44.184637] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.619 [2024-07-27 01:34:44.194018] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.619 [2024-07-27 01:34:44.194439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.194657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.194706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.619 [2024-07-27 01:34:44.194725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.619 [2024-07-27 01:34:44.194950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.619 [2024-07-27 01:34:44.195113] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.619 [2024-07-27 01:34:44.195139] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.619 [2024-07-27 01:34:44.195156] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.619 [2024-07-27 01:34:44.197555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.619 [2024-07-27 01:34:44.206646] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.619 [2024-07-27 01:34:44.207035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.207198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.207228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.619 [2024-07-27 01:34:44.207246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.619 [2024-07-27 01:34:44.207376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.619 [2024-07-27 01:34:44.207508] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.619 [2024-07-27 01:34:44.207533] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.619 [2024-07-27 01:34:44.207550] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.619 [2024-07-27 01:34:44.209899] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.619 [2024-07-27 01:34:44.218994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.619 [2024-07-27 01:34:44.219373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.219578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.219603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.619 [2024-07-27 01:34:44.219620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.619 [2024-07-27 01:34:44.219763] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.619 [2024-07-27 01:34:44.219948] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.619 [2024-07-27 01:34:44.219972] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.619 [2024-07-27 01:34:44.219989] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.619 [2024-07-27 01:34:44.222419] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.619 [2024-07-27 01:34:44.231514] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.619 [2024-07-27 01:34:44.231890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.232068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.232097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.619 [2024-07-27 01:34:44.232116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.619 [2024-07-27 01:34:44.232353] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.619 [2024-07-27 01:34:44.232563] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.619 [2024-07-27 01:34:44.232588] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.619 [2024-07-27 01:34:44.232605] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.619 [2024-07-27 01:34:44.234894] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.619 [2024-07-27 01:34:44.243926] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.619 [2024-07-27 01:34:44.244280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.244466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.244493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.619 [2024-07-27 01:34:44.244511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.619 [2024-07-27 01:34:44.244700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.619 [2024-07-27 01:34:44.244877] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.619 [2024-07-27 01:34:44.244903] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.619 [2024-07-27 01:34:44.244919] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.619 [2024-07-27 01:34:44.247133] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.619 [2024-07-27 01:34:44.256355] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.619 [2024-07-27 01:34:44.256844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.257085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.257112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.619 [2024-07-27 01:34:44.257128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.619 [2024-07-27 01:34:44.257323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.619 [2024-07-27 01:34:44.257488] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.619 [2024-07-27 01:34:44.257513] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.619 [2024-07-27 01:34:44.257530] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.619 [2024-07-27 01:34:44.259752] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.619 [2024-07-27 01:34:44.268808] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.619 [2024-07-27 01:34:44.269167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.269363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.269392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.619 [2024-07-27 01:34:44.269411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.619 [2024-07-27 01:34:44.269593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.619 [2024-07-27 01:34:44.269727] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.619 [2024-07-27 01:34:44.269757] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.619 [2024-07-27 01:34:44.269775] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.619 [2024-07-27 01:34:44.271951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.619 [2024-07-27 01:34:44.281378] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.619 [2024-07-27 01:34:44.281715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.281959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.281984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.619 [2024-07-27 01:34:44.282001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.619 [2024-07-27 01:34:44.282174] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.619 [2024-07-27 01:34:44.282295] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.619 [2024-07-27 01:34:44.282317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.619 [2024-07-27 01:34:44.282332] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.619 [2024-07-27 01:34:44.284868] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.619 [2024-07-27 01:34:44.293821] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.619 [2024-07-27 01:34:44.294191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.294347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.619 [2024-07-27 01:34:44.294373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.619 [2024-07-27 01:34:44.294405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.619 [2024-07-27 01:34:44.294591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.619 [2024-07-27 01:34:44.294772] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.619 [2024-07-27 01:34:44.294797] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.619 [2024-07-27 01:34:44.294813] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.619 [2024-07-27 01:34:44.296948] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.619 [2024-07-27 01:34:44.306488] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.620 [2024-07-27 01:34:44.306882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-27 01:34:44.307123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-27 01:34:44.307151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.620 [2024-07-27 01:34:44.307168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.620 [2024-07-27 01:34:44.307269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.620 [2024-07-27 01:34:44.307392] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.620 [2024-07-27 01:34:44.307417] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.620 [2024-07-27 01:34:44.307439] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.620 [2024-07-27 01:34:44.309841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.620 [2024-07-27 01:34:44.319225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.620 [2024-07-27 01:34:44.319626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-27 01:34:44.319840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-27 01:34:44.319881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.620 [2024-07-27 01:34:44.319897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.620 [2024-07-27 01:34:44.320021] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.620 [2024-07-27 01:34:44.320204] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.620 [2024-07-27 01:34:44.320229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.620 [2024-07-27 01:34:44.320245] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.620 [2024-07-27 01:34:44.322625] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.620 [2024-07-27 01:34:44.331642] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.620 [2024-07-27 01:34:44.332018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-27 01:34:44.332225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-27 01:34:44.332252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.620 [2024-07-27 01:34:44.332269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.620 [2024-07-27 01:34:44.332428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.620 [2024-07-27 01:34:44.332659] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.620 [2024-07-27 01:34:44.332684] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.620 [2024-07-27 01:34:44.332701] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.620 [2024-07-27 01:34:44.335113] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.620 [2024-07-27 01:34:44.344127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.620 [2024-07-27 01:34:44.344509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-27 01:34:44.345320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-27 01:34:44.345356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.620 [2024-07-27 01:34:44.345375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.620 [2024-07-27 01:34:44.345579] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.620 [2024-07-27 01:34:44.345786] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.620 [2024-07-27 01:34:44.345810] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.620 [2024-07-27 01:34:44.345827] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.620 [2024-07-27 01:34:44.348136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.620 [2024-07-27 01:34:44.356674] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.620 [2024-07-27 01:34:44.357120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-27 01:34:44.357307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-27 01:34:44.357336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.620 [2024-07-27 01:34:44.357355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.620 [2024-07-27 01:34:44.357522] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.620 [2024-07-27 01:34:44.357711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.620 [2024-07-27 01:34:44.357735] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.620 [2024-07-27 01:34:44.357751] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.620 [2024-07-27 01:34:44.360115] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.620 [2024-07-27 01:34:44.369126] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.620 [2024-07-27 01:34:44.369537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-27 01:34:44.369757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.620 [2024-07-27 01:34:44.369784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.620 [2024-07-27 01:34:44.369816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.620 [2024-07-27 01:34:44.369928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.620 [2024-07-27 01:34:44.370155] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.620 [2024-07-27 01:34:44.370178] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.620 [2024-07-27 01:34:44.370194] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.620 [2024-07-27 01:34:44.372428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.879 [2024-07-27 01:34:44.381654] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.879 [2024-07-27 01:34:44.382014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.879 [2024-07-27 01:34:44.382217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.879 [2024-07-27 01:34:44.382244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.879 [2024-07-27 01:34:44.382260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.879 [2024-07-27 01:34:44.382432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.879 [2024-07-27 01:34:44.382602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.879 [2024-07-27 01:34:44.382627] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.879 [2024-07-27 01:34:44.382644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.879 [2024-07-27 01:34:44.385004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.879 [2024-07-27 01:34:44.394584] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.879 [2024-07-27 01:34:44.394937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.879 [2024-07-27 01:34:44.395167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.879 [2024-07-27 01:34:44.395194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.879 [2024-07-27 01:34:44.395211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.879 [2024-07-27 01:34:44.395359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.879 [2024-07-27 01:34:44.395507] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.879 [2024-07-27 01:34:44.395531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.880 [2024-07-27 01:34:44.395548] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.880 [2024-07-27 01:34:44.397892] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.880 [2024-07-27 01:34:44.406992] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.880 [2024-07-27 01:34:44.407398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.407656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.407702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.880 [2024-07-27 01:34:44.407720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.880 [2024-07-27 01:34:44.407886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.880 [2024-07-27 01:34:44.408019] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.880 [2024-07-27 01:34:44.408043] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.880 [2024-07-27 01:34:44.408070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.880 [2024-07-27 01:34:44.410363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.880 [2024-07-27 01:34:44.419302] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.880 [2024-07-27 01:34:44.419702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.419881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.419923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.880 [2024-07-27 01:34:44.419942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.880 [2024-07-27 01:34:44.420133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.880 [2024-07-27 01:34:44.420283] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.880 [2024-07-27 01:34:44.420305] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.880 [2024-07-27 01:34:44.420319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.880 [2024-07-27 01:34:44.422767] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.880 [2024-07-27 01:34:44.431956] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.880 [2024-07-27 01:34:44.432359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.432576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.432604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.880 [2024-07-27 01:34:44.432622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.880 [2024-07-27 01:34:44.432799] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.880 [2024-07-27 01:34:44.432963] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.880 [2024-07-27 01:34:44.432986] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.880 [2024-07-27 01:34:44.433002] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.880 [2024-07-27 01:34:44.435414] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.880 [2024-07-27 01:34:44.444458] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.880 [2024-07-27 01:34:44.444830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.445019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.445045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.880 [2024-07-27 01:34:44.445069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.880 [2024-07-27 01:34:44.445219] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.880 [2024-07-27 01:34:44.445412] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.880 [2024-07-27 01:34:44.445434] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.880 [2024-07-27 01:34:44.445448] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.880 [2024-07-27 01:34:44.447567] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.880 [2024-07-27 01:34:44.456690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.880 [2024-07-27 01:34:44.457020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.457240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.457267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.880 [2024-07-27 01:34:44.457283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.880 [2024-07-27 01:34:44.457486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.880 [2024-07-27 01:34:44.457692] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.880 [2024-07-27 01:34:44.457717] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.880 [2024-07-27 01:34:44.457733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.880 [2024-07-27 01:34:44.460031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.880 [2024-07-27 01:34:44.469443] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.880 [2024-07-27 01:34:44.469807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.469984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.470015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.880 [2024-07-27 01:34:44.470032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.880 [2024-07-27 01:34:44.470205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.880 [2024-07-27 01:34:44.470417] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.880 [2024-07-27 01:34:44.470442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.880 [2024-07-27 01:34:44.470458] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.880 [2024-07-27 01:34:44.472515] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.880 [2024-07-27 01:34:44.481935] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.880 [2024-07-27 01:34:44.482445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.482651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.482677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.880 [2024-07-27 01:34:44.482693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.880 [2024-07-27 01:34:44.482857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.880 [2024-07-27 01:34:44.483028] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.880 [2024-07-27 01:34:44.483053] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.880 [2024-07-27 01:34:44.483081] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.880 [2024-07-27 01:34:44.485566] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.880 [2024-07-27 01:34:44.494526] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.880 [2024-07-27 01:34:44.494889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.495096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.495123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.880 [2024-07-27 01:34:44.495139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.880 [2024-07-27 01:34:44.495319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.880 [2024-07-27 01:34:44.495543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.880 [2024-07-27 01:34:44.495568] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.880 [2024-07-27 01:34:44.495585] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.880 [2024-07-27 01:34:44.497765] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.880 [2024-07-27 01:34:44.507252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.880 [2024-07-27 01:34:44.507650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.507901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.880 [2024-07-27 01:34:44.507947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.880 [2024-07-27 01:34:44.507971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.880 [2024-07-27 01:34:44.508129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.880 [2024-07-27 01:34:44.508299] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.880 [2024-07-27 01:34:44.508323] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.880 [2024-07-27 01:34:44.508339] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.880 [2024-07-27 01:34:44.510748] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.881 [2024-07-27 01:34:44.519923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.881 [2024-07-27 01:34:44.520253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.520473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.520501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.881 [2024-07-27 01:34:44.520519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.881 [2024-07-27 01:34:44.520648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.881 [2024-07-27 01:34:44.520764] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.881 [2024-07-27 01:34:44.520788] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.881 [2024-07-27 01:34:44.520804] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.881 [2024-07-27 01:34:44.523095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.881 [2024-07-27 01:34:44.532626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.881 [2024-07-27 01:34:44.533041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.533224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.533253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.881 [2024-07-27 01:34:44.533271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.881 [2024-07-27 01:34:44.533419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.881 [2024-07-27 01:34:44.533588] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.881 [2024-07-27 01:34:44.533612] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.881 [2024-07-27 01:34:44.533629] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.881 [2024-07-27 01:34:44.536015] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.881 [2024-07-27 01:34:44.545258] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.881 [2024-07-27 01:34:44.545669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.545945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.545991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.881 [2024-07-27 01:34:44.546010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.881 [2024-07-27 01:34:44.546173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.881 [2024-07-27 01:34:44.546379] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.881 [2024-07-27 01:34:44.546403] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.881 [2024-07-27 01:34:44.546419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.881 [2024-07-27 01:34:44.548791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.881 [2024-07-27 01:34:44.557985] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.881 [2024-07-27 01:34:44.558350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.558513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.558537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.881 [2024-07-27 01:34:44.558569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.881 [2024-07-27 01:34:44.558723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.881 [2024-07-27 01:34:44.558942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.881 [2024-07-27 01:34:44.558967] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.881 [2024-07-27 01:34:44.558983] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.881 [2024-07-27 01:34:44.561262] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.881 [2024-07-27 01:34:44.570579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.881 [2024-07-27 01:34:44.570987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.571210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.571240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.881 [2024-07-27 01:34:44.571258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.881 [2024-07-27 01:34:44.571405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.881 [2024-07-27 01:34:44.571525] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.881 [2024-07-27 01:34:44.571549] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.881 [2024-07-27 01:34:44.571566] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.881 [2024-07-27 01:34:44.573965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.881 [2024-07-27 01:34:44.583346] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.881 [2024-07-27 01:34:44.583756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.583975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.584003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.881 [2024-07-27 01:34:44.584022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.881 [2024-07-27 01:34:44.584182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.881 [2024-07-27 01:34:44.584359] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.881 [2024-07-27 01:34:44.584384] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.881 [2024-07-27 01:34:44.584400] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.881 [2024-07-27 01:34:44.586714] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.881 [2024-07-27 01:34:44.596003] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.881 [2024-07-27 01:34:44.596364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.596588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.596634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.881 [2024-07-27 01:34:44.596653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.881 [2024-07-27 01:34:44.596819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.881 [2024-07-27 01:34:44.596972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.881 [2024-07-27 01:34:44.596996] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.881 [2024-07-27 01:34:44.597013] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.881 [2024-07-27 01:34:44.599263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.881 [2024-07-27 01:34:44.608541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.881 [2024-07-27 01:34:44.609022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.609227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.609257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.881 [2024-07-27 01:34:44.609275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.881 [2024-07-27 01:34:44.609405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.881 [2024-07-27 01:34:44.609603] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.881 [2024-07-27 01:34:44.609627] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.881 [2024-07-27 01:34:44.609644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.881 [2024-07-27 01:34:44.611844] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.881 [2024-07-27 01:34:44.621203] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.881 [2024-07-27 01:34:44.621573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.621855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.881 [2024-07-27 01:34:44.621901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.881 [2024-07-27 01:34:44.621919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.881 [2024-07-27 01:34:44.622153] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.881 [2024-07-27 01:34:44.622331] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.881 [2024-07-27 01:34:44.622361] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.881 [2024-07-27 01:34:44.622390] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.881 [2024-07-27 01:34:44.624808] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.881 [2024-07-27 01:34:44.633643] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.882 [2024-07-27 01:34:44.634001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.882 [2024-07-27 01:34:44.634231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.882 [2024-07-27 01:34:44.634261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:52.882 [2024-07-27 01:34:44.634279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:52.882 [2024-07-27 01:34:44.634445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:52.882 [2024-07-27 01:34:44.634651] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.882 [2024-07-27 01:34:44.634676] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.882 [2024-07-27 01:34:44.634693] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.142 [2024-07-27 01:34:44.636930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.142 [2024-07-27 01:34:44.646179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.142 [2024-07-27 01:34:44.646576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.142 [2024-07-27 01:34:44.646776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.142 [2024-07-27 01:34:44.646805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.142 [2024-07-27 01:34:44.646823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.142 [2024-07-27 01:34:44.647043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.142 [2024-07-27 01:34:44.647206] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.142 [2024-07-27 01:34:44.647231] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.142 [2024-07-27 01:34:44.647248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.142 [2024-07-27 01:34:44.649627] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.142 [2024-07-27 01:34:44.658837] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.142 [2024-07-27 01:34:44.659144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.142 [2024-07-27 01:34:44.659360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.142 [2024-07-27 01:34:44.659385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.142 [2024-07-27 01:34:44.659401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.142 [2024-07-27 01:34:44.659613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.142 [2024-07-27 01:34:44.659762] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.142 [2024-07-27 01:34:44.659786] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.142 [2024-07-27 01:34:44.659808] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.142 [2024-07-27 01:34:44.662088] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.142 [2024-07-27 01:34:44.671442] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.142 [2024-07-27 01:34:44.671836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.142 [2024-07-27 01:34:44.672093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.142 [2024-07-27 01:34:44.672120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.142 [2024-07-27 01:34:44.672137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.142 [2024-07-27 01:34:44.672358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.142 [2024-07-27 01:34:44.672557] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.142 [2024-07-27 01:34:44.672582] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.142 [2024-07-27 01:34:44.672598] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.142 [2024-07-27 01:34:44.674979] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.142 [2024-07-27 01:34:44.683882] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.142 [2024-07-27 01:34:44.684226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.142 [2024-07-27 01:34:44.684482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.142 [2024-07-27 01:34:44.684529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.142 [2024-07-27 01:34:44.684548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.142 [2024-07-27 01:34:44.684732] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.142 [2024-07-27 01:34:44.684901] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.142 [2024-07-27 01:34:44.684926] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.142 [2024-07-27 01:34:44.684942] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.142 [2024-07-27 01:34:44.686966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.142 [2024-07-27 01:34:44.696474] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.142 [2024-07-27 01:34:44.696824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.142 [2024-07-27 01:34:44.696983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.142 [2024-07-27 01:34:44.697011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.142 [2024-07-27 01:34:44.697030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.142 [2024-07-27 01:34:44.697224] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.142 [2024-07-27 01:34:44.697394] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.142 [2024-07-27 01:34:44.697419] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.142 [2024-07-27 01:34:44.697436] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.142 [2024-07-27 01:34:44.699570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.142 [2024-07-27 01:34:44.709144] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.143 [2024-07-27 01:34:44.709552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.709835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.709860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.143 [2024-07-27 01:34:44.709876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.143 [2024-07-27 01:34:44.710048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.143 [2024-07-27 01:34:44.710220] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.143 [2024-07-27 01:34:44.710245] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.143 [2024-07-27 01:34:44.710261] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.143 [2024-07-27 01:34:44.712751] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.143 [2024-07-27 01:34:44.721679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.143 [2024-07-27 01:34:44.722121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.722314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.722342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.143 [2024-07-27 01:34:44.722360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.143 [2024-07-27 01:34:44.722544] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.143 [2024-07-27 01:34:44.722713] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.143 [2024-07-27 01:34:44.722738] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.143 [2024-07-27 01:34:44.722754] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.143 [2024-07-27 01:34:44.725097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.143 [2024-07-27 01:34:44.734253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.143 [2024-07-27 01:34:44.734697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.734898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.734938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.143 [2024-07-27 01:34:44.734955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.143 [2024-07-27 01:34:44.735140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.143 [2024-07-27 01:34:44.735294] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.143 [2024-07-27 01:34:44.735318] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.143 [2024-07-27 01:34:44.735334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.143 [2024-07-27 01:34:44.737552] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.143 [2024-07-27 01:34:44.746734] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.143 [2024-07-27 01:34:44.747148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.747348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.747377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.143 [2024-07-27 01:34:44.747395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.143 [2024-07-27 01:34:44.747579] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.143 [2024-07-27 01:34:44.747731] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.143 [2024-07-27 01:34:44.747755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.143 [2024-07-27 01:34:44.747771] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.143 [2024-07-27 01:34:44.750047] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.143 [2024-07-27 01:34:44.759320] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.143 [2024-07-27 01:34:44.759735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.759966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.760014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.143 [2024-07-27 01:34:44.760033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.143 [2024-07-27 01:34:44.760262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.143 [2024-07-27 01:34:44.760487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.143 [2024-07-27 01:34:44.760511] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.143 [2024-07-27 01:34:44.760527] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.143 [2024-07-27 01:34:44.762906] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.143 [2024-07-27 01:34:44.771887] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.143 [2024-07-27 01:34:44.772276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.772495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.772521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.143 [2024-07-27 01:34:44.772536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.143 [2024-07-27 01:34:44.772706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.143 [2024-07-27 01:34:44.772860] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.143 [2024-07-27 01:34:44.772885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.143 [2024-07-27 01:34:44.772901] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.143 [2024-07-27 01:34:44.775180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.143 [2024-07-27 01:34:44.784615] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.143 [2024-07-27 01:34:44.785010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.785259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.785286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.143 [2024-07-27 01:34:44.785302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.143 [2024-07-27 01:34:44.785497] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.143 [2024-07-27 01:34:44.785686] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.143 [2024-07-27 01:34:44.785710] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.143 [2024-07-27 01:34:44.785726] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.143 [2024-07-27 01:34:44.788096] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.143 [2024-07-27 01:34:44.797330] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.143 [2024-07-27 01:34:44.797728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.797925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.797953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.143 [2024-07-27 01:34:44.797972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.143 [2024-07-27 01:34:44.798174] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.143 [2024-07-27 01:34:44.798319] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.143 [2024-07-27 01:34:44.798367] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.143 [2024-07-27 01:34:44.798382] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.143 [2024-07-27 01:34:44.800658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.143 [2024-07-27 01:34:44.809840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.143 [2024-07-27 01:34:44.810195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.810341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.810384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.143 [2024-07-27 01:34:44.810400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.143 [2024-07-27 01:34:44.810626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.143 [2024-07-27 01:34:44.810796] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.143 [2024-07-27 01:34:44.810820] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.143 [2024-07-27 01:34:44.810836] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.143 [2024-07-27 01:34:44.813011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.143 [2024-07-27 01:34:44.822529] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.143 [2024-07-27 01:34:44.823011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.143 [2024-07-27 01:34:44.823236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.144 [2024-07-27 01:34:44.823267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.144 [2024-07-27 01:34:44.823284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.144 [2024-07-27 01:34:44.823466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.144 [2024-07-27 01:34:44.823654] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.144 [2024-07-27 01:34:44.823678] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.144 [2024-07-27 01:34:44.823695] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.144 [2024-07-27 01:34:44.825947] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.144 [2024-07-27 01:34:44.835025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.144 [2024-07-27 01:34:44.835387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.144 [2024-07-27 01:34:44.835636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.144 [2024-07-27 01:34:44.835667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.144 [2024-07-27 01:34:44.835686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.144 [2024-07-27 01:34:44.835871] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.144 [2024-07-27 01:34:44.836068] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.144 [2024-07-27 01:34:44.836092] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.144 [2024-07-27 01:34:44.836109] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.144 [2024-07-27 01:34:44.838372] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.144 [2024-07-27 01:34:44.847577] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.144 [2024-07-27 01:34:44.847931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.144 [2024-07-27 01:34:44.848156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.144 [2024-07-27 01:34:44.848184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.144 [2024-07-27 01:34:44.848216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.144 [2024-07-27 01:34:44.848381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.144 [2024-07-27 01:34:44.848569] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.144 [2024-07-27 01:34:44.848594] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.144 [2024-07-27 01:34:44.848611] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.144 [2024-07-27 01:34:44.851095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.144 [2024-07-27 01:34:44.860007] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.144 [2024-07-27 01:34:44.860425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.144 [2024-07-27 01:34:44.860677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.144 [2024-07-27 01:34:44.860724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.144 [2024-07-27 01:34:44.860749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.144 [2024-07-27 01:34:44.860915] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.144 [2024-07-27 01:34:44.861099] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.144 [2024-07-27 01:34:44.861125] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.144 [2024-07-27 01:34:44.861142] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.144 [2024-07-27 01:34:44.863633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.144 [2024-07-27 01:34:44.872626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.144 [2024-07-27 01:34:44.873027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.144 [2024-07-27 01:34:44.873209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.144 [2024-07-27 01:34:44.873237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.144 [2024-07-27 01:34:44.873255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.144 [2024-07-27 01:34:44.873386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.144 [2024-07-27 01:34:44.873572] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.144 [2024-07-27 01:34:44.873598] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.144 [2024-07-27 01:34:44.873614] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.144 [2024-07-27 01:34:44.875839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.144 [2024-07-27 01:34:44.885301] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.144 [2024-07-27 01:34:44.885681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.144 [2024-07-27 01:34:44.885931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.144 [2024-07-27 01:34:44.885961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.144 [2024-07-27 01:34:44.885980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.144 [2024-07-27 01:34:44.886106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.144 [2024-07-27 01:34:44.886257] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.144 [2024-07-27 01:34:44.886281] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.144 [2024-07-27 01:34:44.886297] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.144 [2024-07-27 01:34:44.888571] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.144 [2024-07-27 01:34:44.897883] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.144 [2024-07-27 01:34:44.898231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.898529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.898594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.406 [2024-07-27 01:34:44.898613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.406 [2024-07-27 01:34:44.898771] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.406 [2024-07-27 01:34:44.898940] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.406 [2024-07-27 01:34:44.898965] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.406 [2024-07-27 01:34:44.898982] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.406 [2024-07-27 01:34:44.901309] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.406 [2024-07-27 01:34:44.910541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.406 [2024-07-27 01:34:44.910893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.911113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.911141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.406 [2024-07-27 01:34:44.911157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.406 [2024-07-27 01:34:44.911324] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.406 [2024-07-27 01:34:44.911456] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.406 [2024-07-27 01:34:44.911481] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.406 [2024-07-27 01:34:44.911498] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.406 [2024-07-27 01:34:44.913937] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.406 [2024-07-27 01:34:44.923108] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.406 [2024-07-27 01:34:44.923672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.924080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.924133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.406 [2024-07-27 01:34:44.924151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.406 [2024-07-27 01:34:44.924317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.406 [2024-07-27 01:34:44.924468] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.406 [2024-07-27 01:34:44.924491] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.406 [2024-07-27 01:34:44.924507] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.406 [2024-07-27 01:34:44.926743] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.406 [2024-07-27 01:34:44.935683] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.406 [2024-07-27 01:34:44.936119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.936302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.936327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.406 [2024-07-27 01:34:44.936344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.406 [2024-07-27 01:34:44.936512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.406 [2024-07-27 01:34:44.936686] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.406 [2024-07-27 01:34:44.936712] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.406 [2024-07-27 01:34:44.936729] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.406 [2024-07-27 01:34:44.938954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.406 [2024-07-27 01:34:44.948245] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.406 [2024-07-27 01:34:44.948603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.948910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.948963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.406 [2024-07-27 01:34:44.948981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.406 [2024-07-27 01:34:44.949162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.406 [2024-07-27 01:34:44.949331] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.406 [2024-07-27 01:34:44.949355] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.406 [2024-07-27 01:34:44.949372] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.406 [2024-07-27 01:34:44.951593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.406 [2024-07-27 01:34:44.960978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.406 [2024-07-27 01:34:44.961373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.961579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.961609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.406 [2024-07-27 01:34:44.961628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.406 [2024-07-27 01:34:44.961740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.406 [2024-07-27 01:34:44.961854] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.406 [2024-07-27 01:34:44.961878] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.406 [2024-07-27 01:34:44.961894] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.406 [2024-07-27 01:34:44.964326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.406 [2024-07-27 01:34:44.973760] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.406 [2024-07-27 01:34:44.974179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.974373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.974401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.406 [2024-07-27 01:34:44.974419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.406 [2024-07-27 01:34:44.974602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.406 [2024-07-27 01:34:44.974789] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.406 [2024-07-27 01:34:44.974819] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.406 [2024-07-27 01:34:44.974836] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.406 [2024-07-27 01:34:44.977341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.406 [2024-07-27 01:34:44.986492] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.406 [2024-07-27 01:34:44.986872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.987077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.987106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.406 [2024-07-27 01:34:44.987124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.406 [2024-07-27 01:34:44.987272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.406 [2024-07-27 01:34:44.987386] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.406 [2024-07-27 01:34:44.987410] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.406 [2024-07-27 01:34:44.987426] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.406 [2024-07-27 01:34:44.989574] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.406 [2024-07-27 01:34:44.998923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.406 [2024-07-27 01:34:44.999292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.999483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.406 [2024-07-27 01:34:44.999511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.406 [2024-07-27 01:34:44.999529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.406 [2024-07-27 01:34:44.999641] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.406 [2024-07-27 01:34:44.999810] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.406 [2024-07-27 01:34:44.999835] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.406 [2024-07-27 01:34:44.999852] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.406 [2024-07-27 01:34:45.002177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.407 [2024-07-27 01:34:45.011610] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.407 [2024-07-27 01:34:45.012032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.012236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.012265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.407 [2024-07-27 01:34:45.012283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.407 [2024-07-27 01:34:45.012433] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.407 [2024-07-27 01:34:45.012619] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.407 [2024-07-27 01:34:45.012645] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.407 [2024-07-27 01:34:45.012666] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.407 [2024-07-27 01:34:45.014930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.407 [2024-07-27 01:34:45.024126] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.407 [2024-07-27 01:34:45.024475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.024809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.024858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.407 [2024-07-27 01:34:45.024877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.407 [2024-07-27 01:34:45.025044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.407 [2024-07-27 01:34:45.025207] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.407 [2024-07-27 01:34:45.025232] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.407 [2024-07-27 01:34:45.025248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.407 [2024-07-27 01:34:45.027471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.407 [2024-07-27 01:34:45.036683] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.407 [2024-07-27 01:34:45.037184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.037382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.037412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.407 [2024-07-27 01:34:45.037431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.407 [2024-07-27 01:34:45.037598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.407 [2024-07-27 01:34:45.037820] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.407 [2024-07-27 01:34:45.037846] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.407 [2024-07-27 01:34:45.037862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.407 [2024-07-27 01:34:45.040347] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.407 [2024-07-27 01:34:45.049266] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.407 [2024-07-27 01:34:45.049680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.049866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.049895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.407 [2024-07-27 01:34:45.049914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.407 [2024-07-27 01:34:45.050095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.407 [2024-07-27 01:34:45.050300] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.407 [2024-07-27 01:34:45.050326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.407 [2024-07-27 01:34:45.050343] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.407 [2024-07-27 01:34:45.052482] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.407 [2024-07-27 01:34:45.061873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.407 [2024-07-27 01:34:45.062282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.062606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.062667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.407 [2024-07-27 01:34:45.062685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.407 [2024-07-27 01:34:45.062852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.407 [2024-07-27 01:34:45.063056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.407 [2024-07-27 01:34:45.063095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.407 [2024-07-27 01:34:45.063112] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.407 [2024-07-27 01:34:45.065460] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.407 [2024-07-27 01:34:45.074530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.407 [2024-07-27 01:34:45.074918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.075143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.075203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.407 [2024-07-27 01:34:45.075222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.407 [2024-07-27 01:34:45.075352] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.407 [2024-07-27 01:34:45.075486] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.407 [2024-07-27 01:34:45.075510] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.407 [2024-07-27 01:34:45.075525] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.407 [2024-07-27 01:34:45.077910] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.407 [2024-07-27 01:34:45.087042] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.407 [2024-07-27 01:34:45.087481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.087702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.087728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.407 [2024-07-27 01:34:45.087759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.407 [2024-07-27 01:34:45.087928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.407 [2024-07-27 01:34:45.088100] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.407 [2024-07-27 01:34:45.088126] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.407 [2024-07-27 01:34:45.088144] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.407 [2024-07-27 01:34:45.090277] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.407 [2024-07-27 01:34:45.099519] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.407 [2024-07-27 01:34:45.099901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.100233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.100283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.407 [2024-07-27 01:34:45.100301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.407 [2024-07-27 01:34:45.100485] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.407 [2024-07-27 01:34:45.100618] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.407 [2024-07-27 01:34:45.100642] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.407 [2024-07-27 01:34:45.100659] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.407 [2024-07-27 01:34:45.103064] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.407 [2024-07-27 01:34:45.111982] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.407 [2024-07-27 01:34:45.112352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.112542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.112569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.407 [2024-07-27 01:34:45.112587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.407 [2024-07-27 01:34:45.112735] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.407 [2024-07-27 01:34:45.112940] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.407 [2024-07-27 01:34:45.112966] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.407 [2024-07-27 01:34:45.112983] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.407 [2024-07-27 01:34:45.115379] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.407 [2024-07-27 01:34:45.124571] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.407 [2024-07-27 01:34:45.125033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.407 [2024-07-27 01:34:45.125264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.408 [2024-07-27 01:34:45.125293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.408 [2024-07-27 01:34:45.125313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.408 [2024-07-27 01:34:45.125497] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.408 [2024-07-27 01:34:45.125684] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.408 [2024-07-27 01:34:45.125710] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.408 [2024-07-27 01:34:45.125726] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.408 [2024-07-27 01:34:45.128040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.408 [2024-07-27 01:34:45.137012] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.408 [2024-07-27 01:34:45.137368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.408 [2024-07-27 01:34:45.137668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.408 [2024-07-27 01:34:45.137693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.408 [2024-07-27 01:34:45.137709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.408 [2024-07-27 01:34:45.137901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.408 [2024-07-27 01:34:45.138052] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.408 [2024-07-27 01:34:45.138092] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.408 [2024-07-27 01:34:45.138110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.408 [2024-07-27 01:34:45.140368] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.408 [2024-07-27 01:34:45.149529] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.408 [2024-07-27 01:34:45.150042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.408 [2024-07-27 01:34:45.150291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.408 [2024-07-27 01:34:45.150316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.408 [2024-07-27 01:34:45.150332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.408 [2024-07-27 01:34:45.150575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.408 [2024-07-27 01:34:45.150749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.408 [2024-07-27 01:34:45.150775] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.408 [2024-07-27 01:34:45.150792] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.408 [2024-07-27 01:34:45.153314] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.670 [2024-07-27 01:34:45.162139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.670 [2024-07-27 01:34:45.162471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.670 [2024-07-27 01:34:45.162808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.670 [2024-07-27 01:34:45.162871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.670 [2024-07-27 01:34:45.162889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.670 [2024-07-27 01:34:45.163089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.670 [2024-07-27 01:34:45.163276] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.670 [2024-07-27 01:34:45.163300] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.670 [2024-07-27 01:34:45.163317] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.670 [2024-07-27 01:34:45.165629] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.670 [2024-07-27 01:34:45.174717] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.670 [2024-07-27 01:34:45.175164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.670 [2024-07-27 01:34:45.175550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.670 [2024-07-27 01:34:45.175612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.670 [2024-07-27 01:34:45.175631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.670 [2024-07-27 01:34:45.175814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.670 [2024-07-27 01:34:45.175965] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.670 [2024-07-27 01:34:45.175989] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.670 [2024-07-27 01:34:45.176005] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.670 [2024-07-27 01:34:45.178166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.670 [2024-07-27 01:34:45.187354] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.670 [2024-07-27 01:34:45.187797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.670 [2024-07-27 01:34:45.187967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.670 [2024-07-27 01:34:45.187995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.670 [2024-07-27 01:34:45.188012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.670 [2024-07-27 01:34:45.188210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.670 [2024-07-27 01:34:45.188397] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.670 [2024-07-27 01:34:45.188422] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.670 [2024-07-27 01:34:45.188439] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.670 [2024-07-27 01:34:45.190823] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.670 [2024-07-27 01:34:45.199984] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.670 [2024-07-27 01:34:45.200358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.670 [2024-07-27 01:34:45.200548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.670 [2024-07-27 01:34:45.200575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.670 [2024-07-27 01:34:45.200592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.670 [2024-07-27 01:34:45.200764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.670 [2024-07-27 01:34:45.200976] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.670 [2024-07-27 01:34:45.201002] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.670 [2024-07-27 01:34:45.201019] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.670 [2024-07-27 01:34:45.203325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.670 [2024-07-27 01:34:45.212604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.670 [2024-07-27 01:34:45.212982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.670 [2024-07-27 01:34:45.213211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.213241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-07-27 01:34:45.213265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.671 [2024-07-27 01:34:45.213451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.671 [2024-07-27 01:34:45.213601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.671 [2024-07-27 01:34:45.213626] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.671 [2024-07-27 01:34:45.213643] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.671 [2024-07-27 01:34:45.216051] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.671 [2024-07-27 01:34:45.225055] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.671 [2024-07-27 01:34:45.225456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.225773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.225826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-07-27 01:34:45.225844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.671 [2024-07-27 01:34:45.226009] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.671 [2024-07-27 01:34:45.226209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.671 [2024-07-27 01:34:45.226233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.671 [2024-07-27 01:34:45.226250] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.671 [2024-07-27 01:34:45.228470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.671 [2024-07-27 01:34:45.237546] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.671 [2024-07-27 01:34:45.237889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.238084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.238112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-07-27 01:34:45.238130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.671 [2024-07-27 01:34:45.238295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.671 [2024-07-27 01:34:45.238446] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.671 [2024-07-27 01:34:45.238469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.671 [2024-07-27 01:34:45.238486] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.671 [2024-07-27 01:34:45.240870] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.671 [2024-07-27 01:34:45.250138] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.671 [2024-07-27 01:34:45.250492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.250692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.250731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-07-27 01:34:45.250747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.671 [2024-07-27 01:34:45.250894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.671 [2024-07-27 01:34:45.251077] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.671 [2024-07-27 01:34:45.251103] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.671 [2024-07-27 01:34:45.251120] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.671 [2024-07-27 01:34:45.253521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.671 [2024-07-27 01:34:45.262739] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.671 [2024-07-27 01:34:45.263143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.263381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.263422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-07-27 01:34:45.263439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.671 [2024-07-27 01:34:45.263622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.671 [2024-07-27 01:34:45.263737] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.671 [2024-07-27 01:34:45.263763] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.671 [2024-07-27 01:34:45.263781] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.671 [2024-07-27 01:34:45.266106] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.671 [2024-07-27 01:34:45.275484] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.671 [2024-07-27 01:34:45.275859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.276069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.276094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-07-27 01:34:45.276110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.671 [2024-07-27 01:34:45.276308] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.671 [2024-07-27 01:34:45.276495] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.671 [2024-07-27 01:34:45.276521] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.671 [2024-07-27 01:34:45.276537] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.671 [2024-07-27 01:34:45.278780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.671 [2024-07-27 01:34:45.287983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.671 [2024-07-27 01:34:45.288352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.288572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.288602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-07-27 01:34:45.288620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.671 [2024-07-27 01:34:45.288768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.671 [2024-07-27 01:34:45.288943] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.671 [2024-07-27 01:34:45.288969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.671 [2024-07-27 01:34:45.288985] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.671 [2024-07-27 01:34:45.291166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.671 [2024-07-27 01:34:45.300659] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.671 [2024-07-27 01:34:45.301021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.301227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.301256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-07-27 01:34:45.301275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.671 [2024-07-27 01:34:45.301440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.671 [2024-07-27 01:34:45.301626] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.671 [2024-07-27 01:34:45.301651] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.671 [2024-07-27 01:34:45.301667] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.671 [2024-07-27 01:34:45.303978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.671 [2024-07-27 01:34:45.313029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.671 [2024-07-27 01:34:45.313498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.313742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.313768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.671 [2024-07-27 01:34:45.313784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.671 [2024-07-27 01:34:45.313920] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.671 [2024-07-27 01:34:45.314142] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.671 [2024-07-27 01:34:45.314168] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.671 [2024-07-27 01:34:45.314185] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.671 [2024-07-27 01:34:45.316622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.671 [2024-07-27 01:34:45.325386] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.671 [2024-07-27 01:34:45.325761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.325980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.671 [2024-07-27 01:34:45.326010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-07-27 01:34:45.326028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.672 [2024-07-27 01:34:45.326189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.672 [2024-07-27 01:34:45.326340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.672 [2024-07-27 01:34:45.326369] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.672 [2024-07-27 01:34:45.326386] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.672 [2024-07-27 01:34:45.328701] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.672 [2024-07-27 01:34:45.337819] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.672 [2024-07-27 01:34:45.338178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-07-27 01:34:45.338383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-07-27 01:34:45.338410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-07-27 01:34:45.338427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.672 [2024-07-27 01:34:45.338623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.672 [2024-07-27 01:34:45.338810] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.672 [2024-07-27 01:34:45.338836] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.672 [2024-07-27 01:34:45.338853] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.672 [2024-07-27 01:34:45.341355] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.672 [2024-07-27 01:34:45.350430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.672 [2024-07-27 01:34:45.350813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-07-27 01:34:45.351040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-07-27 01:34:45.351081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-07-27 01:34:45.351101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.672 [2024-07-27 01:34:45.351286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.672 [2024-07-27 01:34:45.351473] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.672 [2024-07-27 01:34:45.351499] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.672 [2024-07-27 01:34:45.351515] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.672 [2024-07-27 01:34:45.353826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.672 [2024-07-27 01:34:45.362974] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.672 [2024-07-27 01:34:45.363366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-07-27 01:34:45.363675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-07-27 01:34:45.363731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-07-27 01:34:45.363750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.672 [2024-07-27 01:34:45.363881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.672 [2024-07-27 01:34:45.364013] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.672 [2024-07-27 01:34:45.364039] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.672 [2024-07-27 01:34:45.364073] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.672 [2024-07-27 01:34:45.366281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.672 [2024-07-27 01:34:45.375545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.672 [2024-07-27 01:34:45.375945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-07-27 01:34:45.376123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-07-27 01:34:45.376152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-07-27 01:34:45.376170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.672 [2024-07-27 01:34:45.376336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.672 [2024-07-27 01:34:45.376506] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.672 [2024-07-27 01:34:45.376532] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.672 [2024-07-27 01:34:45.376549] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.672 [2024-07-27 01:34:45.378898] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.672 [2024-07-27 01:34:45.388171] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.672 [2024-07-27 01:34:45.388593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-07-27 01:34:45.388761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-07-27 01:34:45.388789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-07-27 01:34:45.388807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.672 [2024-07-27 01:34:45.388954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.672 [2024-07-27 01:34:45.389148] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.672 [2024-07-27 01:34:45.389172] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.672 [2024-07-27 01:34:45.389187] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.672 [2024-07-27 01:34:45.391466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.672 [2024-07-27 01:34:45.400755] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.672 [2024-07-27 01:34:45.401121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-07-27 01:34:45.401283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-07-27 01:34:45.401310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-07-27 01:34:45.401326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.672 [2024-07-27 01:34:45.401524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.672 [2024-07-27 01:34:45.401666] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.672 [2024-07-27 01:34:45.401687] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.672 [2024-07-27 01:34:45.401716] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.672 [2024-07-27 01:34:45.404146] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.672 [2024-07-27 01:34:45.413459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.672 [2024-07-27 01:34:45.413880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-07-27 01:34:45.414070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.672 [2024-07-27 01:34:45.414096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.672 [2024-07-27 01:34:45.414113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.672 [2024-07-27 01:34:45.414263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.672 [2024-07-27 01:34:45.414445] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.672 [2024-07-27 01:34:45.414471] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.672 [2024-07-27 01:34:45.414487] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.672 [2024-07-27 01:34:45.416979] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.672 [2024-07-27 01:34:45.425796] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.672 [2024-07-27 01:34:45.426147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-07-27 01:34:45.426321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-07-27 01:34:45.426348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.933 [2024-07-27 01:34:45.426366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.933 [2024-07-27 01:34:45.426522] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.933 [2024-07-27 01:34:45.426674] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.933 [2024-07-27 01:34:45.426699] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.933 [2024-07-27 01:34:45.426714] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.933 [2024-07-27 01:34:45.429135] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.933 [2024-07-27 01:34:45.438399] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.933 [2024-07-27 01:34:45.438877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-07-27 01:34:45.439122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-07-27 01:34:45.439150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.933 [2024-07-27 01:34:45.439168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.933 [2024-07-27 01:34:45.439349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.933 [2024-07-27 01:34:45.439558] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.933 [2024-07-27 01:34:45.439582] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.933 [2024-07-27 01:34:45.439610] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.933 [2024-07-27 01:34:45.442055] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.933 [2024-07-27 01:34:45.450998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.933 [2024-07-27 01:34:45.451322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-07-27 01:34:45.451547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-07-27 01:34:45.451578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.933 [2024-07-27 01:34:45.451596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.933 [2024-07-27 01:34:45.451740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.933 [2024-07-27 01:34:45.451882] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.933 [2024-07-27 01:34:45.451908] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.933 [2024-07-27 01:34:45.451925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.933 [2024-07-27 01:34:45.454286] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.933 [2024-07-27 01:34:45.463638] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.933 [2024-07-27 01:34:45.464034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-07-27 01:34:45.464200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.933 [2024-07-27 01:34:45.464226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.933 [2024-07-27 01:34:45.464242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.934 [2024-07-27 01:34:45.464392] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.934 [2024-07-27 01:34:45.464524] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.934 [2024-07-27 01:34:45.464548] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.934 [2024-07-27 01:34:45.464578] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.934 [2024-07-27 01:34:45.466853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.934 [2024-07-27 01:34:45.476127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.934 [2024-07-27 01:34:45.476537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.476740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.476783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.934 [2024-07-27 01:34:45.476802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.934 [2024-07-27 01:34:45.476969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.934 [2024-07-27 01:34:45.477143] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.934 [2024-07-27 01:34:45.477167] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.934 [2024-07-27 01:34:45.477183] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.934 [2024-07-27 01:34:45.479471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.934 [2024-07-27 01:34:45.488880] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.934 [2024-07-27 01:34:45.489307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.489620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.489661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.934 [2024-07-27 01:34:45.489703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.934 [2024-07-27 01:34:45.489871] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.934 [2024-07-27 01:34:45.490082] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.934 [2024-07-27 01:34:45.490132] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.934 [2024-07-27 01:34:45.490147] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.934 [2024-07-27 01:34:45.492498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.934 [2024-07-27 01:34:45.501156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.934 [2024-07-27 01:34:45.501607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.501843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.501891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.934 [2024-07-27 01:34:45.501910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.934 [2024-07-27 01:34:45.502092] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.934 [2024-07-27 01:34:45.502279] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.934 [2024-07-27 01:34:45.502300] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.934 [2024-07-27 01:34:45.502314] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.934 [2024-07-27 01:34:45.504535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.934 [2024-07-27 01:34:45.513634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.934 [2024-07-27 01:34:45.514080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.514279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.514308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.934 [2024-07-27 01:34:45.514336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.934 [2024-07-27 01:34:45.514518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.934 [2024-07-27 01:34:45.514687] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.934 [2024-07-27 01:34:45.514712] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.934 [2024-07-27 01:34:45.514729] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.934 [2024-07-27 01:34:45.516939] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.934 [2024-07-27 01:34:45.526166] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.934 [2024-07-27 01:34:45.526523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.526743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.526778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.934 [2024-07-27 01:34:45.526797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.934 [2024-07-27 01:34:45.526981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.934 [2024-07-27 01:34:45.527208] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.934 [2024-07-27 01:34:45.527230] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.934 [2024-07-27 01:34:45.527244] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.934 [2024-07-27 01:34:45.529575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.934 [2024-07-27 01:34:45.538683] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.934 [2024-07-27 01:34:45.539079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.539320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.539361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.934 [2024-07-27 01:34:45.539378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.934 [2024-07-27 01:34:45.539606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.934 [2024-07-27 01:34:45.539793] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.934 [2024-07-27 01:34:45.539819] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.934 [2024-07-27 01:34:45.539836] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.934 [2024-07-27 01:34:45.542141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.934 [2024-07-27 01:34:45.551320] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.934 [2024-07-27 01:34:45.551777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.552020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.552065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.934 [2024-07-27 01:34:45.552083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.934 [2024-07-27 01:34:45.552217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.934 [2024-07-27 01:34:45.552374] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.934 [2024-07-27 01:34:45.552399] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.934 [2024-07-27 01:34:45.552416] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.934 [2024-07-27 01:34:45.554790] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.934 [2024-07-27 01:34:45.564055] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.934 [2024-07-27 01:34:45.564481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.564725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.564772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.934 [2024-07-27 01:34:45.564797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.934 [2024-07-27 01:34:45.564964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.934 [2024-07-27 01:34:45.565090] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.934 [2024-07-27 01:34:45.565121] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.934 [2024-07-27 01:34:45.565138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.934 [2024-07-27 01:34:45.567464] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.934 [2024-07-27 01:34:45.576489] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.934 [2024-07-27 01:34:45.576881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.577114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.934 [2024-07-27 01:34:45.577142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.934 [2024-07-27 01:34:45.577175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.934 [2024-07-27 01:34:45.577337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.934 [2024-07-27 01:34:45.577489] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.935 [2024-07-27 01:34:45.577514] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.935 [2024-07-27 01:34:45.577531] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.935 [2024-07-27 01:34:45.579747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.935 [2024-07-27 01:34:45.589145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.935 [2024-07-27 01:34:45.589463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.589659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.589700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.935 [2024-07-27 01:34:45.589716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.935 [2024-07-27 01:34:45.589894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.935 [2024-07-27 01:34:45.590094] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.935 [2024-07-27 01:34:45.590119] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.935 [2024-07-27 01:34:45.590136] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.935 [2024-07-27 01:34:45.592499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.935 [2024-07-27 01:34:45.601808] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.935 [2024-07-27 01:34:45.602223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.602439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.602487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.935 [2024-07-27 01:34:45.602506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.935 [2024-07-27 01:34:45.602696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.935 [2024-07-27 01:34:45.602847] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.935 [2024-07-27 01:34:45.602872] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.935 [2024-07-27 01:34:45.602889] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.935 [2024-07-27 01:34:45.605239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.935 [2024-07-27 01:34:45.614234] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.935 [2024-07-27 01:34:45.614605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.614838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.614885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.935 [2024-07-27 01:34:45.614903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.935 [2024-07-27 01:34:45.615114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.935 [2024-07-27 01:34:45.615268] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.935 [2024-07-27 01:34:45.615292] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.935 [2024-07-27 01:34:45.615308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.935 [2024-07-27 01:34:45.617606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.935 [2024-07-27 01:34:45.626988] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.935 [2024-07-27 01:34:45.627434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.627617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.627643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.935 [2024-07-27 01:34:45.627659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.935 [2024-07-27 01:34:45.627817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.935 [2024-07-27 01:34:45.627989] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.935 [2024-07-27 01:34:45.628014] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.935 [2024-07-27 01:34:45.628030] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.935 [2024-07-27 01:34:45.630194] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.935 [2024-07-27 01:34:45.639581] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.935 [2024-07-27 01:34:45.640043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.640280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.640308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.935 [2024-07-27 01:34:45.640327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.935 [2024-07-27 01:34:45.640493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.935 [2024-07-27 01:34:45.640686] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.935 [2024-07-27 01:34:45.640711] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.935 [2024-07-27 01:34:45.640728] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.935 [2024-07-27 01:34:45.642968] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.935 [2024-07-27 01:34:45.652184] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.935 [2024-07-27 01:34:45.652574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.652901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.652930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.935 [2024-07-27 01:34:45.652948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.935 [2024-07-27 01:34:45.653161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.935 [2024-07-27 01:34:45.653350] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.935 [2024-07-27 01:34:45.653374] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.935 [2024-07-27 01:34:45.653391] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.935 [2024-07-27 01:34:45.655701] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:53.935 [2024-07-27 01:34:45.664612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.935 [2024-07-27 01:34:45.665000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.665215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.665243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.935 [2024-07-27 01:34:45.665260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.935 [2024-07-27 01:34:45.665407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.935 [2024-07-27 01:34:45.665560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.935 [2024-07-27 01:34:45.665584] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.935 [2024-07-27 01:34:45.665601] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.935 [2024-07-27 01:34:45.667885] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.935 [2024-07-27 01:34:45.677435] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:53.935 [2024-07-27 01:34:45.677813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.678036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.935 [2024-07-27 01:34:45.678076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:53.935 [2024-07-27 01:34:45.678097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:53.935 [2024-07-27 01:34:45.678263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:53.935 [2024-07-27 01:34:45.678378] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:53.935 [2024-07-27 01:34:45.678407] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:53.935 [2024-07-27 01:34:45.678423] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:53.935 [2024-07-27 01:34:45.680717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.195 [2024-07-27 01:34:45.690350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.195 [2024-07-27 01:34:45.690754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.195 [2024-07-27 01:34:45.690954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.195 [2024-07-27 01:34:45.690984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.195 [2024-07-27 01:34:45.691002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.195 [2024-07-27 01:34:45.691161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.195 [2024-07-27 01:34:45.691348] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.195 [2024-07-27 01:34:45.691373] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.195 [2024-07-27 01:34:45.691390] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.195 [2024-07-27 01:34:45.693831] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.195 [2024-07-27 01:34:45.702707] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.195 [2024-07-27 01:34:45.703082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.195 [2024-07-27 01:34:45.703297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.195 [2024-07-27 01:34:45.703334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.195 [2024-07-27 01:34:45.703365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.195 [2024-07-27 01:34:45.703564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.195 [2024-07-27 01:34:45.703735] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.195 [2024-07-27 01:34:45.703761] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.195 [2024-07-27 01:34:45.703778] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.195 [2024-07-27 01:34:45.706096] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.195 [2024-07-27 01:34:45.715229] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.195 [2024-07-27 01:34:45.715586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.195 [2024-07-27 01:34:45.715787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.195 [2024-07-27 01:34:45.715835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.195 [2024-07-27 01:34:45.715854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.195 [2024-07-27 01:34:45.715985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.195 [2024-07-27 01:34:45.716148] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.195 [2024-07-27 01:34:45.716172] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.195 [2024-07-27 01:34:45.716194] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.195 [2024-07-27 01:34:45.718582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.195 [2024-07-27 01:34:45.727869] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.195 [2024-07-27 01:34:45.728278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.195 [2024-07-27 01:34:45.728634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.195 [2024-07-27 01:34:45.728684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.195 [2024-07-27 01:34:45.728702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.195 [2024-07-27 01:34:45.728850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.195 [2024-07-27 01:34:45.729036] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.195 [2024-07-27 01:34:45.729074] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.195 [2024-07-27 01:34:45.729093] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.195 [2024-07-27 01:34:45.731277] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.195 [2024-07-27 01:34:45.740567] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.195 [2024-07-27 01:34:45.741011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.195 [2024-07-27 01:34:45.741236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.195 [2024-07-27 01:34:45.741265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.195 [2024-07-27 01:34:45.741284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.195 [2024-07-27 01:34:45.741522] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.195 [2024-07-27 01:34:45.741691] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.195 [2024-07-27 01:34:45.741717] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.196 [2024-07-27 01:34:45.741733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.196 [2024-07-27 01:34:45.744072] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.196 [2024-07-27 01:34:45.753184] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.196 [2024-07-27 01:34:45.753585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.753821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.753850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.196 [2024-07-27 01:34:45.753869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.196 [2024-07-27 01:34:45.754085] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.196 [2024-07-27 01:34:45.754254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.196 [2024-07-27 01:34:45.754279] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.196 [2024-07-27 01:34:45.754296] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.196 [2024-07-27 01:34:45.756578] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.196 [2024-07-27 01:34:45.765682] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.196 [2024-07-27 01:34:45.766071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.766242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.766270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.196 [2024-07-27 01:34:45.766288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.196 [2024-07-27 01:34:45.766491] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.196 [2024-07-27 01:34:45.766678] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.196 [2024-07-27 01:34:45.766703] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.196 [2024-07-27 01:34:45.766720] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.196 [2024-07-27 01:34:45.769015] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.196 [2024-07-27 01:34:45.778470] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.196 [2024-07-27 01:34:45.778816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.779010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.779039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.196 [2024-07-27 01:34:45.779069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.196 [2024-07-27 01:34:45.779220] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.196 [2024-07-27 01:34:45.779425] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.196 [2024-07-27 01:34:45.779449] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.196 [2024-07-27 01:34:45.779465] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.196 [2024-07-27 01:34:45.781834] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.196 [2024-07-27 01:34:45.790934] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.196 [2024-07-27 01:34:45.791340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.791532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.791561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.196 [2024-07-27 01:34:45.791579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.196 [2024-07-27 01:34:45.791727] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.196 [2024-07-27 01:34:45.791950] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.196 [2024-07-27 01:34:45.791976] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.196 [2024-07-27 01:34:45.791992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.196 [2024-07-27 01:34:45.794353] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.196 [2024-07-27 01:34:45.803379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.196 [2024-07-27 01:34:45.803767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.803995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.804044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.196 [2024-07-27 01:34:45.804076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.196 [2024-07-27 01:34:45.804247] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.196 [2024-07-27 01:34:45.804434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.196 [2024-07-27 01:34:45.804459] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.196 [2024-07-27 01:34:45.804475] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.196 [2024-07-27 01:34:45.806856] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.196 [2024-07-27 01:34:45.815876] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.196 [2024-07-27 01:34:45.816256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.816503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.816550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.196 [2024-07-27 01:34:45.816568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.196 [2024-07-27 01:34:45.816734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.196 [2024-07-27 01:34:45.816923] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.196 [2024-07-27 01:34:45.816948] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.196 [2024-07-27 01:34:45.816964] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.196 [2024-07-27 01:34:45.819281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.196 [2024-07-27 01:34:45.828404] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.196 [2024-07-27 01:34:45.828833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.829073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.829113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.196 [2024-07-27 01:34:45.829131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.196 [2024-07-27 01:34:45.829295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.196 [2024-07-27 01:34:45.829438] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.196 [2024-07-27 01:34:45.829464] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.196 [2024-07-27 01:34:45.829480] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.196 [2024-07-27 01:34:45.831776] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.196 [2024-07-27 01:34:45.840867] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.196 [2024-07-27 01:34:45.841292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.841492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.841538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.196 [2024-07-27 01:34:45.841557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.196 [2024-07-27 01:34:45.841723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.196 [2024-07-27 01:34:45.841891] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.196 [2024-07-27 01:34:45.841917] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.196 [2024-07-27 01:34:45.841934] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.196 [2024-07-27 01:34:45.844258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.196 [2024-07-27 01:34:45.853525] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.196 [2024-07-27 01:34:45.853985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.854180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.196 [2024-07-27 01:34:45.854210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.196 [2024-07-27 01:34:45.854229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.196 [2024-07-27 01:34:45.854376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.196 [2024-07-27 01:34:45.854546] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.196 [2024-07-27 01:34:45.854570] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.196 [2024-07-27 01:34:45.854587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.196 [2024-07-27 01:34:45.856883] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.196 [2024-07-27 01:34:45.866175] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.196 [2024-07-27 01:34:45.866625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-27 01:34:45.866894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-27 01:34:45.866920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.197 [2024-07-27 01:34:45.866936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.197 [2024-07-27 01:34:45.867136] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.197 [2024-07-27 01:34:45.867317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.197 [2024-07-27 01:34:45.867342] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.197 [2024-07-27 01:34:45.867358] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.197 [2024-07-27 01:34:45.869653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.197 [2024-07-27 01:34:45.878730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.197 [2024-07-27 01:34:45.879140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-27 01:34:45.879312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-27 01:34:45.879346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.197 [2024-07-27 01:34:45.879365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.197 [2024-07-27 01:34:45.879495] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.197 [2024-07-27 01:34:45.879664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.197 [2024-07-27 01:34:45.879688] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.197 [2024-07-27 01:34:45.879705] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.197 [2024-07-27 01:34:45.882031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.197 [2024-07-27 01:34:45.891251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.197 [2024-07-27 01:34:45.891701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-27 01:34:45.891902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-27 01:34:45.891927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.197 [2024-07-27 01:34:45.891943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.197 [2024-07-27 01:34:45.892114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.197 [2024-07-27 01:34:45.892284] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.197 [2024-07-27 01:34:45.892308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.197 [2024-07-27 01:34:45.892325] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.197 [2024-07-27 01:34:45.894755] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.197 [2024-07-27 01:34:45.903613] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.197 [2024-07-27 01:34:45.903954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-27 01:34:45.904173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-27 01:34:45.904200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.197 [2024-07-27 01:34:45.904216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.197 [2024-07-27 01:34:45.904397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.197 [2024-07-27 01:34:45.904567] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.197 [2024-07-27 01:34:45.904591] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.197 [2024-07-27 01:34:45.904608] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.197 [2024-07-27 01:34:45.906922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.197 [2024-07-27 01:34:45.916246] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.197 [2024-07-27 01:34:45.916667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-27 01:34:45.916842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-27 01:34:45.916867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.197 [2024-07-27 01:34:45.916887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.197 [2024-07-27 01:34:45.917015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.197 [2024-07-27 01:34:45.917152] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.197 [2024-07-27 01:34:45.917177] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.197 [2024-07-27 01:34:45.917194] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.197 [2024-07-27 01:34:45.919322] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.197 [2024-07-27 01:34:45.928857] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.197 [2024-07-27 01:34:45.929205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-27 01:34:45.929422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-27 01:34:45.929473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.197 [2024-07-27 01:34:45.929492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.197 [2024-07-27 01:34:45.929658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.197 [2024-07-27 01:34:45.929845] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.197 [2024-07-27 01:34:45.929869] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.197 [2024-07-27 01:34:45.929886] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.197 [2024-07-27 01:34:45.932133] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.197 [2024-07-27 01:34:45.941335] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.197 [2024-07-27 01:34:45.941669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-27 01:34:45.941955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.197 [2024-07-27 01:34:45.942007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.197 [2024-07-27 01:34:45.942025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.197 [2024-07-27 01:34:45.942217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.197 [2024-07-27 01:34:45.942405] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.197 [2024-07-27 01:34:45.942430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.197 [2024-07-27 01:34:45.942446] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.197 [2024-07-27 01:34:45.944826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.456 [2024-07-27 01:34:45.953804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.456 [2024-07-27 01:34:45.954181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-07-27 01:34:45.954373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-07-27 01:34:45.954421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.456 [2024-07-27 01:34:45.954440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.456 [2024-07-27 01:34:45.954630] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.456 [2024-07-27 01:34:45.954799] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.456 [2024-07-27 01:34:45.954824] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.456 [2024-07-27 01:34:45.954841] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.457 [2024-07-27 01:34:45.957394] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.457 [2024-07-27 01:34:45.966473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.457 [2024-07-27 01:34:45.966850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:45.967078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:45.967107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.457 [2024-07-27 01:34:45.967125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.457 [2024-07-27 01:34:45.967309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.457 [2024-07-27 01:34:45.967424] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.457 [2024-07-27 01:34:45.967448] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.457 [2024-07-27 01:34:45.967464] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.457 [2024-07-27 01:34:45.969629] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.457 [2024-07-27 01:34:45.979084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.457 [2024-07-27 01:34:45.979420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:45.979617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:45.979643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.457 [2024-07-27 01:34:45.979660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.457 [2024-07-27 01:34:45.979803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.457 [2024-07-27 01:34:45.979972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.457 [2024-07-27 01:34:45.979997] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.457 [2024-07-27 01:34:45.980013] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.457 [2024-07-27 01:34:45.982412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.457 [2024-07-27 01:34:45.991551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.457 [2024-07-27 01:34:45.991928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:45.992180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:45.992209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.457 [2024-07-27 01:34:45.992227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.457 [2024-07-27 01:34:45.992428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.457 [2024-07-27 01:34:45.992586] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.457 [2024-07-27 01:34:45.992610] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.457 [2024-07-27 01:34:45.992627] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.457 [2024-07-27 01:34:45.994955] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.457 [2024-07-27 01:34:46.004073] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.457 [2024-07-27 01:34:46.004443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:46.004662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:46.004691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.457 [2024-07-27 01:34:46.004709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.457 [2024-07-27 01:34:46.004911] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.457 [2024-07-27 01:34:46.005149] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.457 [2024-07-27 01:34:46.005175] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.457 [2024-07-27 01:34:46.005192] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.457 [2024-07-27 01:34:46.007640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.457 [2024-07-27 01:34:46.016440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.457 [2024-07-27 01:34:46.016882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:46.017135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:46.017176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.457 [2024-07-27 01:34:46.017192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.457 [2024-07-27 01:34:46.017387] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.457 [2024-07-27 01:34:46.017575] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.457 [2024-07-27 01:34:46.017599] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.457 [2024-07-27 01:34:46.017616] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.457 [2024-07-27 01:34:46.019960] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.457 [2024-07-27 01:34:46.029064] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.457 [2024-07-27 01:34:46.029432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:46.029725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:46.029778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.457 [2024-07-27 01:34:46.029797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.457 [2024-07-27 01:34:46.029944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.457 [2024-07-27 01:34:46.030105] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.457 [2024-07-27 01:34:46.030135] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.457 [2024-07-27 01:34:46.030153] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.457 [2024-07-27 01:34:46.032606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.457 [2024-07-27 01:34:46.041583] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.457 [2024-07-27 01:34:46.042032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:46.042272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:46.042301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.457 [2024-07-27 01:34:46.042319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.457 [2024-07-27 01:34:46.042484] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.457 [2024-07-27 01:34:46.042672] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.457 [2024-07-27 01:34:46.042697] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.457 [2024-07-27 01:34:46.042713] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.457 [2024-07-27 01:34:46.045209] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.457 [2024-07-27 01:34:46.054255] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.457 [2024-07-27 01:34:46.054675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:46.054850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:46.054875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.457 [2024-07-27 01:34:46.054891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.457 [2024-07-27 01:34:46.055101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.457 [2024-07-27 01:34:46.055282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.457 [2024-07-27 01:34:46.055307] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.457 [2024-07-27 01:34:46.055324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.457 [2024-07-27 01:34:46.057542] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.457 [2024-07-27 01:34:46.066780] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.457 [2024-07-27 01:34:46.067168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:46.067389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-07-27 01:34:46.067419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.457 [2024-07-27 01:34:46.067437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.457 [2024-07-27 01:34:46.067584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.457 [2024-07-27 01:34:46.067754] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.457 [2024-07-27 01:34:46.067779] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.457 [2024-07-27 01:34:46.067804] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.457 [2024-07-27 01:34:46.070123] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.457 [2024-07-27 01:34:46.079407] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.457 [2024-07-27 01:34:46.079769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.079962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.079990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.458 [2024-07-27 01:34:46.080009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.458 [2024-07-27 01:34:46.080186] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.458 [2024-07-27 01:34:46.080338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.458 [2024-07-27 01:34:46.080362] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.458 [2024-07-27 01:34:46.080379] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.458 [2024-07-27 01:34:46.082506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.458 [2024-07-27 01:34:46.092076] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.458 [2024-07-27 01:34:46.092458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.092658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.092683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.458 [2024-07-27 01:34:46.092698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.458 [2024-07-27 01:34:46.092869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.458 [2024-07-27 01:34:46.093070] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.458 [2024-07-27 01:34:46.093096] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.458 [2024-07-27 01:34:46.093112] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.458 [2024-07-27 01:34:46.095205] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.458 [2024-07-27 01:34:46.104663] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.458 [2024-07-27 01:34:46.105070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.105265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.105296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.458 [2024-07-27 01:34:46.105314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.458 [2024-07-27 01:34:46.105445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.458 [2024-07-27 01:34:46.105632] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.458 [2024-07-27 01:34:46.105657] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.458 [2024-07-27 01:34:46.105674] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.458 [2024-07-27 01:34:46.107853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.458 [2024-07-27 01:34:46.117314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.458 [2024-07-27 01:34:46.117667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.117922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.117969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.458 [2024-07-27 01:34:46.117987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.458 [2024-07-27 01:34:46.118146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.458 [2024-07-27 01:34:46.118334] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.458 [2024-07-27 01:34:46.118359] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.458 [2024-07-27 01:34:46.118376] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.458 [2024-07-27 01:34:46.120809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.458 [2024-07-27 01:34:46.130067] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.458 [2024-07-27 01:34:46.130482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.130701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.130727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.458 [2024-07-27 01:34:46.130743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.458 [2024-07-27 01:34:46.130940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.458 [2024-07-27 01:34:46.131155] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.458 [2024-07-27 01:34:46.131181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.458 [2024-07-27 01:34:46.131197] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.458 [2024-07-27 01:34:46.133342] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.458 [2024-07-27 01:34:46.142580] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.458 [2024-07-27 01:34:46.142946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.143144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.143174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.458 [2024-07-27 01:34:46.143193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.458 [2024-07-27 01:34:46.143322] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.458 [2024-07-27 01:34:46.143511] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.458 [2024-07-27 01:34:46.143536] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.458 [2024-07-27 01:34:46.143552] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.458 [2024-07-27 01:34:46.145862] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.458 [2024-07-27 01:34:46.155150] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.458 [2024-07-27 01:34:46.155723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.156121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.156151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.458 [2024-07-27 01:34:46.156169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.458 [2024-07-27 01:34:46.156299] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.458 [2024-07-27 01:34:46.156505] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.458 [2024-07-27 01:34:46.156530] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.458 [2024-07-27 01:34:46.156546] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.458 [2024-07-27 01:34:46.158871] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.458 [2024-07-27 01:34:46.167811] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.458 [2024-07-27 01:34:46.168232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.168494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.168545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.458 [2024-07-27 01:34:46.168564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.458 [2024-07-27 01:34:46.168711] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.458 [2024-07-27 01:34:46.168917] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.458 [2024-07-27 01:34:46.168942] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.458 [2024-07-27 01:34:46.168959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.458 [2024-07-27 01:34:46.171330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.458 [2024-07-27 01:34:46.179966] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.458 [2024-07-27 01:34:46.180384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.180713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.180765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.458 [2024-07-27 01:34:46.180783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.458 [2024-07-27 01:34:46.180948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.458 [2024-07-27 01:34:46.181128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.458 [2024-07-27 01:34:46.181153] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.458 [2024-07-27 01:34:46.181170] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.458 [2024-07-27 01:34:46.183407] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.458 [2024-07-27 01:34:46.192400] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.458 [2024-07-27 01:34:46.192787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.193014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.458 [2024-07-27 01:34:46.193043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.458 [2024-07-27 01:34:46.193071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.459 [2024-07-27 01:34:46.193222] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.459 [2024-07-27 01:34:46.193428] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.459 [2024-07-27 01:34:46.193452] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.459 [2024-07-27 01:34:46.193469] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.459 [2024-07-27 01:34:46.195797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.459 [2024-07-27 01:34:46.205153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.459 [2024-07-27 01:34:46.205540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.459 [2024-07-27 01:34:46.205731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.459 [2024-07-27 01:34:46.205778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.459 [2024-07-27 01:34:46.205797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.459 [2024-07-27 01:34:46.205999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.459 [2024-07-27 01:34:46.206220] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.459 [2024-07-27 01:34:46.206246] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.459 [2024-07-27 01:34:46.206262] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.459 [2024-07-27 01:34:46.208426] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.718 [2024-07-27 01:34:46.217606] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.718 [2024-07-27 01:34:46.218139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.718 [2024-07-27 01:34:46.218327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.718 [2024-07-27 01:34:46.218356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.718 [2024-07-27 01:34:46.218374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.718 [2024-07-27 01:34:46.218559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.718 [2024-07-27 01:34:46.218710] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.718 [2024-07-27 01:34:46.218735] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.718 [2024-07-27 01:34:46.218751] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.718 [2024-07-27 01:34:46.221273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.718 [2024-07-27 01:34:46.230041] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.718 [2024-07-27 01:34:46.230416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.718 [2024-07-27 01:34:46.230641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.718 [2024-07-27 01:34:46.230694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.718 [2024-07-27 01:34:46.230714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.718 [2024-07-27 01:34:46.230880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.718 [2024-07-27 01:34:46.231098] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.718 [2024-07-27 01:34:46.231124] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.718 [2024-07-27 01:34:46.231140] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.718 [2024-07-27 01:34:46.233288] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.718 [2024-07-27 01:34:46.242532] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.718 [2024-07-27 01:34:46.242881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.718 [2024-07-27 01:34:46.243111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.718 [2024-07-27 01:34:46.243141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.718 [2024-07-27 01:34:46.243160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.718 [2024-07-27 01:34:46.243361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.718 [2024-07-27 01:34:46.243550] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.718 [2024-07-27 01:34:46.243575] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.718 [2024-07-27 01:34:46.243591] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.718 [2024-07-27 01:34:46.245953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.718 [2024-07-27 01:34:46.255186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.718 [2024-07-27 01:34:46.255607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.718 [2024-07-27 01:34:46.255802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.718 [2024-07-27 01:34:46.255831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.718 [2024-07-27 01:34:46.255849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.718 [2024-07-27 01:34:46.256080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.718 [2024-07-27 01:34:46.256195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.718 [2024-07-27 01:34:46.256220] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.718 [2024-07-27 01:34:46.256236] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.718 [2024-07-27 01:34:46.258669] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.718 [2024-07-27 01:34:46.267624] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.718 [2024-07-27 01:34:46.268158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.718 [2024-07-27 01:34:46.268416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.718 [2024-07-27 01:34:46.268457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.719 [2024-07-27 01:34:46.268478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.719 [2024-07-27 01:34:46.268658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.719 [2024-07-27 01:34:46.268792] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.719 [2024-07-27 01:34:46.268817] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.719 [2024-07-27 01:34:46.268833] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.719 [2024-07-27 01:34:46.271222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.719 [2024-07-27 01:34:46.280184] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.719 [2024-07-27 01:34:46.280599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.280924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.280970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.719 [2024-07-27 01:34:46.280989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.719 [2024-07-27 01:34:46.281165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.719 [2024-07-27 01:34:46.281317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.719 [2024-07-27 01:34:46.281341] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.719 [2024-07-27 01:34:46.281358] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.719 [2024-07-27 01:34:46.283901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.719 [2024-07-27 01:34:46.292946] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.719 [2024-07-27 01:34:46.293369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.293630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.293670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.719 [2024-07-27 01:34:46.293686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.719 [2024-07-27 01:34:46.293851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.719 [2024-07-27 01:34:46.294056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.719 [2024-07-27 01:34:46.294091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.719 [2024-07-27 01:34:46.294107] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.719 [2024-07-27 01:34:46.296417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.719 [2024-07-27 01:34:46.305466] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.719 [2024-07-27 01:34:46.305974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.306234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.306260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.719 [2024-07-27 01:34:46.306275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.719 [2024-07-27 01:34:46.306457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.719 [2024-07-27 01:34:46.306610] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.719 [2024-07-27 01:34:46.306634] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.719 [2024-07-27 01:34:46.306651] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.719 [2024-07-27 01:34:46.308911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.719 [2024-07-27 01:34:46.317976] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.719 [2024-07-27 01:34:46.318336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.318642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.318667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.719 [2024-07-27 01:34:46.318698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.719 [2024-07-27 01:34:46.318879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.719 [2024-07-27 01:34:46.319071] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.719 [2024-07-27 01:34:46.319096] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.719 [2024-07-27 01:34:46.319113] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.719 [2024-07-27 01:34:46.321365] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.719 [2024-07-27 01:34:46.330643] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.719 [2024-07-27 01:34:46.331025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.331257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.331283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.719 [2024-07-27 01:34:46.331299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.719 [2024-07-27 01:34:46.331495] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.719 [2024-07-27 01:34:46.331665] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.719 [2024-07-27 01:34:46.331689] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.719 [2024-07-27 01:34:46.331705] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.719 [2024-07-27 01:34:46.334077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.719 [2024-07-27 01:34:46.343317] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.719 [2024-07-27 01:34:46.343803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.344044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.344082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.719 [2024-07-27 01:34:46.344102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.719 [2024-07-27 01:34:46.344303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.719 [2024-07-27 01:34:46.344497] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.719 [2024-07-27 01:34:46.344522] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.719 [2024-07-27 01:34:46.344538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.719 [2024-07-27 01:34:46.346827] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.719 [2024-07-27 01:34:46.355771] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.719 [2024-07-27 01:34:46.356168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.356346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.356387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.719 [2024-07-27 01:34:46.356403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.719 [2024-07-27 01:34:46.356609] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.719 [2024-07-27 01:34:46.356779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.719 [2024-07-27 01:34:46.356804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.719 [2024-07-27 01:34:46.356820] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.719 [2024-07-27 01:34:46.359408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.719 [2024-07-27 01:34:46.368257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.719 [2024-07-27 01:34:46.368638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.368878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.368903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.719 [2024-07-27 01:34:46.368920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.719 [2024-07-27 01:34:46.369078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.719 [2024-07-27 01:34:46.369248] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.719 [2024-07-27 01:34:46.369272] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.719 [2024-07-27 01:34:46.369288] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.719 [2024-07-27 01:34:46.371648] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.719 [2024-07-27 01:34:46.380870] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.719 [2024-07-27 01:34:46.381212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.381432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.719 [2024-07-27 01:34:46.381457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.719 [2024-07-27 01:34:46.381474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.720 [2024-07-27 01:34:46.381634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.720 [2024-07-27 01:34:46.381822] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.720 [2024-07-27 01:34:46.381852] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.720 [2024-07-27 01:34:46.381869] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.720 [2024-07-27 01:34:46.384352] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.720 [2024-07-27 01:34:46.393502] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.720 [2024-07-27 01:34:46.393942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.720 [2024-07-27 01:34:46.394122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.720 [2024-07-27 01:34:46.394154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.720 [2024-07-27 01:34:46.394173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.720 [2024-07-27 01:34:46.394376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.720 [2024-07-27 01:34:46.394563] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.720 [2024-07-27 01:34:46.394588] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.720 [2024-07-27 01:34:46.394604] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.720 [2024-07-27 01:34:46.396902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.720 [2024-07-27 01:34:46.406081] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.720 [2024-07-27 01:34:46.406433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.720 [2024-07-27 01:34:46.406759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.720 [2024-07-27 01:34:46.406816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.720 [2024-07-27 01:34:46.406834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.720 [2024-07-27 01:34:46.406964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.720 [2024-07-27 01:34:46.407167] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.720 [2024-07-27 01:34:46.407193] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.720 [2024-07-27 01:34:46.407209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.720 [2024-07-27 01:34:46.409572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.720 [2024-07-27 01:34:46.418747] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.720 [2024-07-27 01:34:46.419088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.720 [2024-07-27 01:34:46.419377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.720 [2024-07-27 01:34:46.419427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.720 [2024-07-27 01:34:46.419446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.720 [2024-07-27 01:34:46.419648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.720 [2024-07-27 01:34:46.419855] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.720 [2024-07-27 01:34:46.419879] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.720 [2024-07-27 01:34:46.419901] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.720 [2024-07-27 01:34:46.422329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.720 [2024-07-27 01:34:46.431278] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.720 [2024-07-27 01:34:46.431689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.720 [2024-07-27 01:34:46.432001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.720 [2024-07-27 01:34:46.432053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.720 [2024-07-27 01:34:46.432082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.720 [2024-07-27 01:34:46.432285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.720 [2024-07-27 01:34:46.432437] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.720 [2024-07-27 01:34:46.432461] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.720 [2024-07-27 01:34:46.432477] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.720 [2024-07-27 01:34:46.434696] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.720 [2024-07-27 01:34:46.443928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.720 [2024-07-27 01:34:46.444270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.720 [2024-07-27 01:34:46.444609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.720 [2024-07-27 01:34:46.444690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.720 [2024-07-27 01:34:46.444709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.720 [2024-07-27 01:34:46.444911] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.720 [2024-07-27 01:34:46.445091] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.720 [2024-07-27 01:34:46.445116] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.720 [2024-07-27 01:34:46.445132] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.720 [2024-07-27 01:34:46.447549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.720 [2024-07-27 01:34:46.456426] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.720 [2024-07-27 01:34:46.456806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.720 [2024-07-27 01:34:46.457003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.720 [2024-07-27 01:34:46.457032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.720 [2024-07-27 01:34:46.457050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.720 [2024-07-27 01:34:46.457210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.720 [2024-07-27 01:34:46.457380] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.720 [2024-07-27 01:34:46.457405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.720 [2024-07-27 01:34:46.457421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.720 [2024-07-27 01:34:46.459573] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.720 [2024-07-27 01:34:46.469042] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.720 [2024-07-27 01:34:46.469440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.720 [2024-07-27 01:34:46.469666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.720 [2024-07-27 01:34:46.469690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.720 [2024-07-27 01:34:46.469706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.720 [2024-07-27 01:34:46.469876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.720 [2024-07-27 01:34:46.470010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.720 [2024-07-27 01:34:46.470034] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.720 [2024-07-27 01:34:46.470050] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.720 [2024-07-27 01:34:46.472351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.980 [2024-07-27 01:34:46.481653] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.980 [2024-07-27 01:34:46.482046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.482242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.482273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.980 [2024-07-27 01:34:46.482291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.980 [2024-07-27 01:34:46.482440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.980 [2024-07-27 01:34:46.482574] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.980 [2024-07-27 01:34:46.482599] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.980 [2024-07-27 01:34:46.482615] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.980 [2024-07-27 01:34:46.484964] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.980 [2024-07-27 01:34:46.493991] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.980 [2024-07-27 01:34:46.494370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.494642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.494693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.980 [2024-07-27 01:34:46.494712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.980 [2024-07-27 01:34:46.494878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.980 [2024-07-27 01:34:46.495012] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.980 [2024-07-27 01:34:46.495037] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.980 [2024-07-27 01:34:46.495053] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.980 [2024-07-27 01:34:46.497443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.980 [2024-07-27 01:34:46.506616] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.980 [2024-07-27 01:34:46.506953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.507153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.507180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.980 [2024-07-27 01:34:46.507196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.980 [2024-07-27 01:34:46.507362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.980 [2024-07-27 01:34:46.507514] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.980 [2024-07-27 01:34:46.507538] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.980 [2024-07-27 01:34:46.507554] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.980 [2024-07-27 01:34:46.509967] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.980 [2024-07-27 01:34:46.519137] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.980 [2024-07-27 01:34:46.519485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.519678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.519707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.980 [2024-07-27 01:34:46.519725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.980 [2024-07-27 01:34:46.519909] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.980 [2024-07-27 01:34:46.520139] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.980 [2024-07-27 01:34:46.520162] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.980 [2024-07-27 01:34:46.520176] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.980 [2024-07-27 01:34:46.522423] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.980 [2024-07-27 01:34:46.531440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.980 [2024-07-27 01:34:46.531778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.531967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.531993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.980 [2024-07-27 01:34:46.532010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.980 [2024-07-27 01:34:46.532197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.980 [2024-07-27 01:34:46.532349] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.980 [2024-07-27 01:34:46.532374] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.980 [2024-07-27 01:34:46.532390] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.980 [2024-07-27 01:34:46.534804] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.980 [2024-07-27 01:34:46.543915] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.980 [2024-07-27 01:34:46.544243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.544455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.544483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.980 [2024-07-27 01:34:46.544502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.980 [2024-07-27 01:34:46.544685] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.980 [2024-07-27 01:34:46.544872] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.980 [2024-07-27 01:34:46.544897] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.980 [2024-07-27 01:34:46.544913] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.980 [2024-07-27 01:34:46.547104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.980 [2024-07-27 01:34:46.556536] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.980 [2024-07-27 01:34:46.556875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.557110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.557137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.980 [2024-07-27 01:34:46.557155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.980 [2024-07-27 01:34:46.557304] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.980 [2024-07-27 01:34:46.557503] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.980 [2024-07-27 01:34:46.557528] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.980 [2024-07-27 01:34:46.557545] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.980 [2024-07-27 01:34:46.559870] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.980 [2024-07-27 01:34:46.568922] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.980 [2024-07-27 01:34:46.569252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.569398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.980 [2024-07-27 01:34:46.569424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.980 [2024-07-27 01:34:46.569441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.980 [2024-07-27 01:34:46.569602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.980 [2024-07-27 01:34:46.569771] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.980 [2024-07-27 01:34:46.569796] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.980 [2024-07-27 01:34:46.569812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.980 [2024-07-27 01:34:46.572247] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.980 [2024-07-27 01:34:46.581449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.980 [2024-07-27 01:34:46.581886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.582115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.582142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.981 [2024-07-27 01:34:46.582159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.981 [2024-07-27 01:34:46.582292] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.981 [2024-07-27 01:34:46.582453] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.981 [2024-07-27 01:34:46.582479] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.981 [2024-07-27 01:34:46.582495] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.981 [2024-07-27 01:34:46.585007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.981 [2024-07-27 01:34:46.594020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.981 [2024-07-27 01:34:46.594358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.594584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.594613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.981 [2024-07-27 01:34:46.594631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.981 [2024-07-27 01:34:46.594779] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.981 [2024-07-27 01:34:46.594895] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.981 [2024-07-27 01:34:46.594920] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.981 [2024-07-27 01:34:46.594936] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.981 [2024-07-27 01:34:46.597376] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.981 [2024-07-27 01:34:46.606815] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.981 [2024-07-27 01:34:46.607130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.607278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.607304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.981 [2024-07-27 01:34:46.607321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.981 [2024-07-27 01:34:46.607520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.981 [2024-07-27 01:34:46.607708] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.981 [2024-07-27 01:34:46.607732] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.981 [2024-07-27 01:34:46.607749] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.981 [2024-07-27 01:34:46.610006] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.981 [2024-07-27 01:34:46.619511] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.981 [2024-07-27 01:34:46.619900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.620074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.620119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.981 [2024-07-27 01:34:46.620141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.981 [2024-07-27 01:34:46.620242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.981 [2024-07-27 01:34:46.620422] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.981 [2024-07-27 01:34:46.620447] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.981 [2024-07-27 01:34:46.620463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.981 [2024-07-27 01:34:46.622748] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.981 [2024-07-27 01:34:46.632170] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.981 [2024-07-27 01:34:46.632514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.632707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.632736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.981 [2024-07-27 01:34:46.632754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.981 [2024-07-27 01:34:46.632901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.981 [2024-07-27 01:34:46.633100] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.981 [2024-07-27 01:34:46.633125] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.981 [2024-07-27 01:34:46.633141] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.981 [2024-07-27 01:34:46.635502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.981 [2024-07-27 01:34:46.644663] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.981 [2024-07-27 01:34:46.645118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.645272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.645298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.981 [2024-07-27 01:34:46.645315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.981 [2024-07-27 01:34:46.645447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.981 [2024-07-27 01:34:46.645593] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.981 [2024-07-27 01:34:46.645618] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.981 [2024-07-27 01:34:46.645634] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.981 [2024-07-27 01:34:46.648078] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.981 [2024-07-27 01:34:46.656933] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.981 [2024-07-27 01:34:46.657359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.657587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.657634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.981 [2024-07-27 01:34:46.657653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.981 [2024-07-27 01:34:46.657791] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.981 [2024-07-27 01:34:46.658015] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.981 [2024-07-27 01:34:46.658040] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.981 [2024-07-27 01:34:46.658056] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.981 [2024-07-27 01:34:46.660484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.981 [2024-07-27 01:34:46.669701] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.981 [2024-07-27 01:34:46.670096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.670329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.670355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.981 [2024-07-27 01:34:46.670387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.981 [2024-07-27 01:34:46.670633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.981 [2024-07-27 01:34:46.670803] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.981 [2024-07-27 01:34:46.670827] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.981 [2024-07-27 01:34:46.670843] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.981 [2024-07-27 01:34:46.673001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.981 [2024-07-27 01:34:46.682162] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.981 [2024-07-27 01:34:46.682524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.682722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.682751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.981 [2024-07-27 01:34:46.682769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.981 [2024-07-27 01:34:46.682917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.981 [2024-07-27 01:34:46.683098] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.981 [2024-07-27 01:34:46.683123] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.981 [2024-07-27 01:34:46.683140] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.981 [2024-07-27 01:34:46.685412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.981 [2024-07-27 01:34:46.694824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.981 [2024-07-27 01:34:46.695273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.695468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.981 [2024-07-27 01:34:46.695496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.982 [2024-07-27 01:34:46.695514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.982 [2024-07-27 01:34:46.695662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.982 [2024-07-27 01:34:46.695855] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.982 [2024-07-27 01:34:46.695880] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.982 [2024-07-27 01:34:46.695896] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.982 [2024-07-27 01:34:46.698267] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.982 [2024-07-27 01:34:46.707180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.982 [2024-07-27 01:34:46.707578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.982 [2024-07-27 01:34:46.707778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.982 [2024-07-27 01:34:46.707807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.982 [2024-07-27 01:34:46.707825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.982 [2024-07-27 01:34:46.707973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.982 [2024-07-27 01:34:46.708177] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.982 [2024-07-27 01:34:46.708203] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.982 [2024-07-27 01:34:46.708220] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.982 [2024-07-27 01:34:46.710507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.982 [2024-07-27 01:34:46.719716] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.982 [2024-07-27 01:34:46.720142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.982 [2024-07-27 01:34:46.720345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.982 [2024-07-27 01:34:46.720374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.982 [2024-07-27 01:34:46.720393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.982 [2024-07-27 01:34:46.720522] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.982 [2024-07-27 01:34:46.720674] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.982 [2024-07-27 01:34:46.720699] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.982 [2024-07-27 01:34:46.720715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.982 [2024-07-27 01:34:46.723194] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.982 [2024-07-27 01:34:46.732375] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.982 [2024-07-27 01:34:46.732772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.982 [2024-07-27 01:34:46.732984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.982 [2024-07-27 01:34:46.733013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:54.982 [2024-07-27 01:34:46.733031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:54.982 [2024-07-27 01:34:46.733227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:54.982 [2024-07-27 01:34:46.733380] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.982 [2024-07-27 01:34:46.733410] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.982 [2024-07-27 01:34:46.733427] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.982 [2024-07-27 01:34:46.735806] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.241 [2024-07-27 01:34:46.745188] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.241 [2024-07-27 01:34:46.745556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.241 [2024-07-27 01:34:46.745772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.241 [2024-07-27 01:34:46.745798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.242 [2024-07-27 01:34:46.745814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.242 [2024-07-27 01:34:46.746028] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.242 [2024-07-27 01:34:46.746214] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.242 [2024-07-27 01:34:46.746239] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.242 [2024-07-27 01:34:46.746255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.242 [2024-07-27 01:34:46.748579] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.242 [2024-07-27 01:34:46.757797] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.242 [2024-07-27 01:34:46.758244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.242 [2024-07-27 01:34:46.758446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.242 [2024-07-27 01:34:46.758481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.242 [2024-07-27 01:34:46.758497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.242 [2024-07-27 01:34:46.758640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.242 [2024-07-27 01:34:46.758807] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.242 [2024-07-27 01:34:46.758833] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.242 [2024-07-27 01:34:46.758850] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.242 [2024-07-27 01:34:46.761156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 749954 Killed "${NVMF_APP[@]}" "$@" 00:26:55.242 01:34:46 -- host/bdevperf.sh@36 -- # tgt_init 00:26:55.242 01:34:46 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:55.242 01:34:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:55.242 01:34:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:55.242 01:34:46 -- common/autotest_common.sh@10 -- # set +x 00:26:55.242 01:34:46 -- nvmf/common.sh@469 -- # nvmfpid=751064 00:26:55.242 01:34:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:55.242 01:34:46 -- nvmf/common.sh@470 -- # waitforlisten 751064 00:26:55.242 01:34:46 -- common/autotest_common.sh@819 -- # '[' -z 751064 ']' 00:26:55.242 01:34:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.242 01:34:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:55.242 01:34:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.242 01:34:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:55.242 01:34:46 -- common/autotest_common.sh@10 -- # set +x 00:26:55.242 [2024-07-27 01:34:46.770310] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.242 [2024-07-27 01:34:46.770654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.242 [2024-07-27 01:34:46.770872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.242 [2024-07-27 01:34:46.770898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.242 [2024-07-27 01:34:46.770914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.242 [2024-07-27 01:34:46.771080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.242 [2024-07-27 01:34:46.771268] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.242 [2024-07-27 01:34:46.771291] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.242 [2024-07-27 01:34:46.771307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.242 [2024-07-27 01:34:46.773306] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
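At this point bdevperf.sh has killed the previous target process ("749954 Killed") and tgt_init restarts nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -m 0xE, then waits for it to come up and listen on the UNIX domain socket /var/tmp/spdk.sock. A simplified stand-in for that wait, assuming only standard shell tools (the real waitforlisten helper in autotest_common.sh also drives rpc.py and has more checks, none of which are reproduced here):

  #!/usr/bin/env bash
  # Simplified sketch of "waitforlisten <pid>": poll until the RPC socket
  # exists and the target process is still alive, or give up after a timeout.
  pid=$1                          # e.g. 751064 in the log above
  rpc_sock=${2:-/var/tmp/spdk.sock}
  for _ in $(seq 1 100); do       # roughly 10 s total with 0.1 s steps
      kill -0 "$pid" 2>/dev/null || { echo "target $pid died" >&2; exit 1; }
      [ -S "$rpc_sock" ] && exit 0
      sleep 0.1
  done
  echo "timed out waiting for $rpc_sock" >&2
  exit 1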
00:26:55.242 [2024-07-27 01:34:46.782590] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.242 [2024-07-27 01:34:46.782943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.242 [2024-07-27 01:34:46.783148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.242 [2024-07-27 01:34:46.783175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.242 [2024-07-27 01:34:46.783193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.242 [2024-07-27 01:34:46.783326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.242 [2024-07-27 01:34:46.783513] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.242 [2024-07-27 01:34:46.783533] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.242 [2024-07-27 01:34:46.783546] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.242 [2024-07-27 01:34:46.785699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.242 [2024-07-27 01:34:46.794764] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.242 [2024-07-27 01:34:46.795215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.242 [2024-07-27 01:34:46.795390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.242 [2024-07-27 01:34:46.795416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.242 [2024-07-27 01:34:46.795433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.242 [2024-07-27 01:34:46.795598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.242 [2024-07-27 01:34:46.795766] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.242 [2024-07-27 01:34:46.795786] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.242 [2024-07-27 01:34:46.795801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.242 [2024-07-27 01:34:46.797753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.242 [2024-07-27 01:34:46.806792] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.242 [2024-07-27 01:34:46.807202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.242 [2024-07-27 01:34:46.807357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.242 [2024-07-27 01:34:46.807383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.242 [2024-07-27 01:34:46.807415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.242 [2024-07-27 01:34:46.807574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.242 [2024-07-27 01:34:46.807730] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.242 [2024-07-27 01:34:46.807750] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.242 [2024-07-27 01:34:46.807764] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.242 [2024-07-27 01:34:46.807768] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:55.242 [2024-07-27 01:34:46.807822] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.242 [2024-07-27 01:34:46.809799] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.242 [2024-07-27 01:34:46.819127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.242 [2024-07-27 01:34:46.819593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.242 [2024-07-27 01:34:46.819774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.242 [2024-07-27 01:34:46.819800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.242 [2024-07-27 01:34:46.819817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.242 [2024-07-27 01:34:46.819993] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.242 [2024-07-27 01:34:46.820147] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.242 [2024-07-27 01:34:46.820170] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.242 [2024-07-27 01:34:46.820185] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.242 [2024-07-27 01:34:46.822288] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.242 [2024-07-27 01:34:46.831466] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.242 [2024-07-27 01:34:46.831802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.242 [2024-07-27 01:34:46.832014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.242 [2024-07-27 01:34:46.832040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.242 [2024-07-27 01:34:46.832057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.242 [2024-07-27 01:34:46.832199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.242 [2024-07-27 01:34:46.832335] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.242 [2024-07-27 01:34:46.832376] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.242 [2024-07-27 01:34:46.832395] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.242 [2024-07-27 01:34:46.834474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.242 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.242 [2024-07-27 01:34:46.843661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.242 [2024-07-27 01:34:46.844020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.844244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.844270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.243 [2024-07-27 01:34:46.844287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.243 [2024-07-27 01:34:46.844447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.243 [2024-07-27 01:34:46.844613] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.243 [2024-07-27 01:34:46.844633] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.243 [2024-07-27 01:34:46.844647] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.243 [2024-07-27 01:34:46.846567] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
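The interleaved "EAL: No free 2048 kB hugepages reported on node 1" notice comes from DPDK's memory initialization (the EAL parameters appear a few lines above); it only means NUMA node 1 has no 2 MB hugepages reserved, not that startup failed. A quick way to inspect the reservation this refers to, assuming a standard Linux sysfs layout:

  # Total 2 MB hugepage pool and how much of it is free:
  grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo

  # Per-NUMA-node reservation (node 1 is the one the EAL notice mentions):
  cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages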
00:26:55.243 [2024-07-27 01:34:46.855976] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.243 [2024-07-27 01:34:46.856395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.856625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.856654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.243 [2024-07-27 01:34:46.856672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.243 [2024-07-27 01:34:46.856838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.243 [2024-07-27 01:34:46.856989] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.243 [2024-07-27 01:34:46.857013] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.243 [2024-07-27 01:34:46.857030] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.243 [2024-07-27 01:34:46.859429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.243 [2024-07-27 01:34:46.868479] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.243 [2024-07-27 01:34:46.868837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.869004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.869034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.243 [2024-07-27 01:34:46.869052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.243 [2024-07-27 01:34:46.869247] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.243 [2024-07-27 01:34:46.869456] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.243 [2024-07-27 01:34:46.869481] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.243 [2024-07-27 01:34:46.869503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.243 [2024-07-27 01:34:46.871782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.243 [2024-07-27 01:34:46.878187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:55.243 [2024-07-27 01:34:46.881156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.243 [2024-07-27 01:34:46.881625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.881831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.881860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.243 [2024-07-27 01:34:46.881878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.243 [2024-07-27 01:34:46.881991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.243 [2024-07-27 01:34:46.882174] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.243 [2024-07-27 01:34:46.882195] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.243 [2024-07-27 01:34:46.882209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.243 [2024-07-27 01:34:46.884554] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.243 [2024-07-27 01:34:46.893530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.243 [2024-07-27 01:34:46.894103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.894346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.894372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.243 [2024-07-27 01:34:46.894410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.243 [2024-07-27 01:34:46.894567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.243 [2024-07-27 01:34:46.894704] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.243 [2024-07-27 01:34:46.894729] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.243 [2024-07-27 01:34:46.894749] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.243 [2024-07-27 01:34:46.897142] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.243 [2024-07-27 01:34:46.906003] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.243 [2024-07-27 01:34:46.906438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.906652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.906679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.243 [2024-07-27 01:34:46.906696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.243 [2024-07-27 01:34:46.906887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.243 [2024-07-27 01:34:46.907086] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.243 [2024-07-27 01:34:46.907125] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.243 [2024-07-27 01:34:46.907140] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.243 [2024-07-27 01:34:46.909453] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.243 [2024-07-27 01:34:46.918470] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.243 [2024-07-27 01:34:46.918884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.919073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.919119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.243 [2024-07-27 01:34:46.919137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.243 [2024-07-27 01:34:46.919287] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.243 [2024-07-27 01:34:46.919462] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.243 [2024-07-27 01:34:46.919488] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.243 [2024-07-27 01:34:46.919505] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.243 [2024-07-27 01:34:46.921855] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.243 [2024-07-27 01:34:46.930908] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.243 [2024-07-27 01:34:46.931335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.931549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.931578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.243 [2024-07-27 01:34:46.931610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.243 [2024-07-27 01:34:46.931795] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.243 [2024-07-27 01:34:46.931948] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.243 [2024-07-27 01:34:46.931973] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.243 [2024-07-27 01:34:46.931990] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.243 [2024-07-27 01:34:46.934249] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.243 [2024-07-27 01:34:46.943436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.243 [2024-07-27 01:34:46.943959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.944186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.944214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.243 [2024-07-27 01:34:46.944233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.243 [2024-07-27 01:34:46.944409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.243 [2024-07-27 01:34:46.944619] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.243 [2024-07-27 01:34:46.944645] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.243 [2024-07-27 01:34:46.944664] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.243 [2024-07-27 01:34:46.946924] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.243 [2024-07-27 01:34:46.956084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.243 [2024-07-27 01:34:46.956576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.956762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.243 [2024-07-27 01:34:46.956791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.244 [2024-07-27 01:34:46.956811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.244 [2024-07-27 01:34:46.956998] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.244 [2024-07-27 01:34:46.957196] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.244 [2024-07-27 01:34:46.957223] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.244 [2024-07-27 01:34:46.957241] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.244 [2024-07-27 01:34:46.959532] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.244 [2024-07-27 01:34:46.968667] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.244 [2024-07-27 01:34:46.969084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.244 [2024-07-27 01:34:46.969309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.244 [2024-07-27 01:34:46.969335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.244 [2024-07-27 01:34:46.969353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.244 [2024-07-27 01:34:46.969586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.244 [2024-07-27 01:34:46.969739] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.244 [2024-07-27 01:34:46.969764] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.244 [2024-07-27 01:34:46.969781] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.244 [2024-07-27 01:34:46.972179] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.244 [2024-07-27 01:34:46.981175] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.244 [2024-07-27 01:34:46.981542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.244 [2024-07-27 01:34:46.981784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.244 [2024-07-27 01:34:46.981813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.244 [2024-07-27 01:34:46.981832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.244 [2024-07-27 01:34:46.982017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.244 [2024-07-27 01:34:46.982214] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.244 [2024-07-27 01:34:46.982237] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.244 [2024-07-27 01:34:46.982251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.244 [2024-07-27 01:34:46.984570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.244 [2024-07-27 01:34:46.993553] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.244 [2024-07-27 01:34:46.993630] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:55.244 [2024-07-27 01:34:46.993784] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.244 [2024-07-27 01:34:46.993804] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.244 [2024-07-27 01:34:46.993818] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
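The app_setup_trace notices above describe how to retrieve the tracepoints enabled by -e 0xFFFF: either attach to the live shared-memory trace of target instance 0 or copy it out for offline decoding. A short sketch using only the two commands the log itself suggests (the spdk_trace binary lives under the SPDK build tree, e.g. build/bin/, in this workspace):

  # Live snapshot of the nvmf target's tracepoints (instance id 0), as the notice suggests:
  spdk_trace -s nvmf -i 0

  # Or keep the shared-memory trace file for later offline analysis/debug:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0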
00:26:55.244 [2024-07-27 01:34:46.993950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.244 [2024-07-27 01:34:46.993901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.244 [2024-07-27 01:34:46.993966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.244 [2024-07-27 01:34:46.993969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.244 [2024-07-27 01:34:46.994139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.244 [2024-07-27 01:34:46.994165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.244 [2024-07-27 01:34:46.994181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.244 [2024-07-27 01:34:46.994314] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.244 [2024-07-27 01:34:46.994446] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.244 [2024-07-27 01:34:46.994482] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.244 [2024-07-27 01:34:46.994497] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.244 [2024-07-27 01:34:46.996640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.505 [2024-07-27 01:34:47.005947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.505 [2024-07-27 01:34:47.006472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.505 [2024-07-27 01:34:47.006682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.505 [2024-07-27 01:34:47.006709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.505 [2024-07-27 01:34:47.006730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.505 [2024-07-27 01:34:47.006910] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.505 [2024-07-27 01:34:47.007134] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.505 [2024-07-27 01:34:47.007158] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.505 [2024-07-27 01:34:47.007177] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.505 [2024-07-27 01:34:47.009281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
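The three "Reactor started on core 1/2/3" notices line up with the -m 0xE core mask passed to nvmf_tgt above: 0xE is binary 1110, i.e. cores 1, 2 and 3 with core 0 excluded, which also matches the earlier "Total cores available: 3" notice. A small sketch, assuming plain bash arithmetic, that decodes such a mask:

  #!/usr/bin/env bash
  # Decode an SPDK/DPDK-style hex core mask into the list of CPU cores it selects.
  mask=0xE                         # value used by nvmf_tgt in this run
  cores=()
  for ((cpu = 0; cpu < 64; cpu++)); do
      if (( (mask >> cpu) & 1 )); then
          cores+=("$cpu")
      fi
  done
  echo "mask ${mask} -> cores: ${cores[*]}"   # prints: mask 0xE -> cores: 1 2 3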
00:26:55.505 [2024-07-27 01:34:47.018505] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.505 [2024-07-27 01:34:47.019162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.505 [2024-07-27 01:34:47.019343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.505 [2024-07-27 01:34:47.019381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.506 [2024-07-27 01:34:47.019402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.506 [2024-07-27 01:34:47.019619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.506 [2024-07-27 01:34:47.019757] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.506 [2024-07-27 01:34:47.019780] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.506 [2024-07-27 01:34:47.019799] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.506 [2024-07-27 01:34:47.021908] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.506 [2024-07-27 01:34:47.030723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.506 [2024-07-27 01:34:47.031242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.031445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.031472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.506 [2024-07-27 01:34:47.031494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.506 [2024-07-27 01:34:47.031718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.506 [2024-07-27 01:34:47.031870] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.506 [2024-07-27 01:34:47.031893] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.506 [2024-07-27 01:34:47.031913] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.506 [2024-07-27 01:34:47.033996] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.506 [2024-07-27 01:34:47.043087] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.506 [2024-07-27 01:34:47.043619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.043823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.043851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.506 [2024-07-27 01:34:47.043874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.506 [2024-07-27 01:34:47.044051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.506 [2024-07-27 01:34:47.044204] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.506 [2024-07-27 01:34:47.044228] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.506 [2024-07-27 01:34:47.044247] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.506 [2024-07-27 01:34:47.046208] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.506 [2024-07-27 01:34:47.055613] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.506 [2024-07-27 01:34:47.056030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.056234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.056263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.506 [2024-07-27 01:34:47.056282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.506 [2024-07-27 01:34:47.056421] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.506 [2024-07-27 01:34:47.056571] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.506 [2024-07-27 01:34:47.056604] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.506 [2024-07-27 01:34:47.056621] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.506 [2024-07-27 01:34:47.058654] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.506 [2024-07-27 01:34:47.068025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.506 [2024-07-27 01:34:47.068638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.068866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.068893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.506 [2024-07-27 01:34:47.068915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.506 [2024-07-27 01:34:47.069084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.506 [2024-07-27 01:34:47.069259] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.506 [2024-07-27 01:34:47.069284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.506 [2024-07-27 01:34:47.069304] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.506 [2024-07-27 01:34:47.071225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.506 [2024-07-27 01:34:47.080494] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.506 [2024-07-27 01:34:47.080900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.081096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.081125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.506 [2024-07-27 01:34:47.081144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.506 [2024-07-27 01:34:47.081317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.506 [2024-07-27 01:34:47.081467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.506 [2024-07-27 01:34:47.081490] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.506 [2024-07-27 01:34:47.081506] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.506 [2024-07-27 01:34:47.083580] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.506 [2024-07-27 01:34:47.092603] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.506 [2024-07-27 01:34:47.093018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.093226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.093253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.506 [2024-07-27 01:34:47.093270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.506 [2024-07-27 01:34:47.093418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.506 [2024-07-27 01:34:47.093562] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.506 [2024-07-27 01:34:47.093584] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.506 [2024-07-27 01:34:47.093607] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.506 [2024-07-27 01:34:47.095719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.506 [2024-07-27 01:34:47.104994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.506 [2024-07-27 01:34:47.105330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.105511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.105537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.506 [2024-07-27 01:34:47.105553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.506 [2024-07-27 01:34:47.105774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.506 [2024-07-27 01:34:47.105917] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.506 [2024-07-27 01:34:47.105939] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.506 [2024-07-27 01:34:47.105954] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.506 [2024-07-27 01:34:47.107817] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.506 [2024-07-27 01:34:47.117127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.506 [2024-07-27 01:34:47.117448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.117612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.117639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.506 [2024-07-27 01:34:47.117655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.506 [2024-07-27 01:34:47.117866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.506 [2024-07-27 01:34:47.118011] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.506 [2024-07-27 01:34:47.118033] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.506 [2024-07-27 01:34:47.118073] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.506 [2024-07-27 01:34:47.120153] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.506 [2024-07-27 01:34:47.129280] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.506 [2024-07-27 01:34:47.129642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.129851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.506 [2024-07-27 01:34:47.129878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.506 [2024-07-27 01:34:47.129895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.506 [2024-07-27 01:34:47.130012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.507 [2024-07-27 01:34:47.130201] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.507 [2024-07-27 01:34:47.130225] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.507 [2024-07-27 01:34:47.130240] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.507 [2024-07-27 01:34:47.132293] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.507 [2024-07-27 01:34:47.141625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.507 [2024-07-27 01:34:47.142030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.142220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.142246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.507 [2024-07-27 01:34:47.142263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.507 [2024-07-27 01:34:47.142443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.507 [2024-07-27 01:34:47.142603] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.507 [2024-07-27 01:34:47.142625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.507 [2024-07-27 01:34:47.142640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.507 [2024-07-27 01:34:47.144630] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.507 [2024-07-27 01:34:47.154046] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.507 [2024-07-27 01:34:47.154357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.154535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.154562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.507 [2024-07-27 01:34:47.154579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.507 [2024-07-27 01:34:47.154774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.507 [2024-07-27 01:34:47.154918] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.507 [2024-07-27 01:34:47.154940] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.507 [2024-07-27 01:34:47.154955] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.507 [2024-07-27 01:34:47.157130] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.507 [2024-07-27 01:34:47.166127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.507 [2024-07-27 01:34:47.166444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.166628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.166654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.507 [2024-07-27 01:34:47.166669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.507 [2024-07-27 01:34:47.166786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.507 [2024-07-27 01:34:47.166979] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.507 [2024-07-27 01:34:47.167000] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.507 [2024-07-27 01:34:47.167014] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.507 [2024-07-27 01:34:47.169106] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.507 [2024-07-27 01:34:47.178546] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.507 [2024-07-27 01:34:47.178877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.179089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.179118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.507 [2024-07-27 01:34:47.179134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.507 [2024-07-27 01:34:47.179333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.507 [2024-07-27 01:34:47.179511] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.507 [2024-07-27 01:34:47.179533] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.507 [2024-07-27 01:34:47.179548] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.507 [2024-07-27 01:34:47.181705] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.507 [2024-07-27 01:34:47.191035] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.507 [2024-07-27 01:34:47.191386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.191565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.191592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.507 [2024-07-27 01:34:47.191609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.507 [2024-07-27 01:34:47.191757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.507 [2024-07-27 01:34:47.191915] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.507 [2024-07-27 01:34:47.191936] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.507 [2024-07-27 01:34:47.191950] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.507 [2024-07-27 01:34:47.194098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.507 [2024-07-27 01:34:47.203384] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.507 [2024-07-27 01:34:47.203803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.203985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.204010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.507 [2024-07-27 01:34:47.204026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.507 [2024-07-27 01:34:47.204200] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.507 [2024-07-27 01:34:47.204320] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.507 [2024-07-27 01:34:47.204342] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.507 [2024-07-27 01:34:47.204371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.507 [2024-07-27 01:34:47.206326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.507 [2024-07-27 01:34:47.215630] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.507 [2024-07-27 01:34:47.216035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.216222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.216249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.507 [2024-07-27 01:34:47.216266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.507 [2024-07-27 01:34:47.216463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.507 [2024-07-27 01:34:47.216607] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.507 [2024-07-27 01:34:47.216628] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.507 [2024-07-27 01:34:47.216643] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.507 [2024-07-27 01:34:47.218735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.507 [2024-07-27 01:34:47.228022] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.507 [2024-07-27 01:34:47.228411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.228568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.228594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.507 [2024-07-27 01:34:47.228610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.507 [2024-07-27 01:34:47.228774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.507 [2024-07-27 01:34:47.228950] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.507 [2024-07-27 01:34:47.228971] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.507 [2024-07-27 01:34:47.228986] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.507 [2024-07-27 01:34:47.230867] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.507 [2024-07-27 01:34:47.240281] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.507 [2024-07-27 01:34:47.240631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.240835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.507 [2024-07-27 01:34:47.240860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.507 [2024-07-27 01:34:47.240877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.507 [2024-07-27 01:34:47.240994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.507 [2024-07-27 01:34:47.241195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.507 [2024-07-27 01:34:47.241216] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.507 [2024-07-27 01:34:47.241231] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.508 [2024-07-27 01:34:47.243428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.508 [2024-07-27 01:34:47.252459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.508 [2024-07-27 01:34:47.252825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-27 01:34:47.253006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.508 [2024-07-27 01:34:47.253039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.508 [2024-07-27 01:34:47.253057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.508 [2024-07-27 01:34:47.253200] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.508 [2024-07-27 01:34:47.253384] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.508 [2024-07-27 01:34:47.253406] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.508 [2024-07-27 01:34:47.253421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.508 [2024-07-27 01:34:47.255436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
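After each refused connect, the log shows "Failed to flush tqpair=... (9): Bad file descriptor": the number in parentheses matches errno 9 (EBADF), consistent with the flush running against a socket descriptor that has already been torn down. A small sketch, separate from the SPDK code paths above, showing how an I/O call on an already-closed descriptor yields that errno:

/* Sketch: errno 9 (EBADF) from writing to a file descriptor that was
 * already closed, mirroring the "(9): Bad file descriptor" flush errors. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }

    close(fds[1]);                       /* tear the descriptor down first */

    if (write(fds[1], "x", 1) < 0) {
        /* Prints: errno=9 (Bad file descriptor) */
        printf("write() failed: errno=%d (%s)\n", errno, strerror(errno));
    }

    close(fds[0]);
    return 0;
}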
00:26:55.767 [2024-07-27 01:34:47.264811] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.767 [2024-07-27 01:34:47.265191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.767 [2024-07-27 01:34:47.265376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.265402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.768 [2024-07-27 01:34:47.265419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.768 [2024-07-27 01:34:47.265536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.768 [2024-07-27 01:34:47.265729] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.768 [2024-07-27 01:34:47.265751] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.768 [2024-07-27 01:34:47.265766] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.768 [2024-07-27 01:34:47.268026] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.768 [2024-07-27 01:34:47.277149] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.768 [2024-07-27 01:34:47.277515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.277719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.277744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.768 [2024-07-27 01:34:47.277761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.768 [2024-07-27 01:34:47.277925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.768 [2024-07-27 01:34:47.278144] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.768 [2024-07-27 01:34:47.278167] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.768 [2024-07-27 01:34:47.278183] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.768 [2024-07-27 01:34:47.280155] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.768 [2024-07-27 01:34:47.289443] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.768 [2024-07-27 01:34:47.289800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.289953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.289978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.768 [2024-07-27 01:34:47.290000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.768 [2024-07-27 01:34:47.290144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.768 [2024-07-27 01:34:47.290307] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.768 [2024-07-27 01:34:47.290329] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.768 [2024-07-27 01:34:47.290343] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.768 [2024-07-27 01:34:47.292343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.768 [2024-07-27 01:34:47.301656] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.768 [2024-07-27 01:34:47.301963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.302133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.302161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.768 [2024-07-27 01:34:47.302178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.768 [2024-07-27 01:34:47.302326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.768 [2024-07-27 01:34:47.302515] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.768 [2024-07-27 01:34:47.302538] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.768 [2024-07-27 01:34:47.302552] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.768 [2024-07-27 01:34:47.304598] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.768 [2024-07-27 01:34:47.313742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.768 [2024-07-27 01:34:47.314131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.314311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.314338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.768 [2024-07-27 01:34:47.314354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.768 [2024-07-27 01:34:47.314552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.768 [2024-07-27 01:34:47.314711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.768 [2024-07-27 01:34:47.314733] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.768 [2024-07-27 01:34:47.314748] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.768 [2024-07-27 01:34:47.316900] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.768 [2024-07-27 01:34:47.326131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.768 [2024-07-27 01:34:47.326447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.326660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.326687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.768 [2024-07-27 01:34:47.326704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.768 [2024-07-27 01:34:47.326844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.768 [2024-07-27 01:34:47.327037] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.768 [2024-07-27 01:34:47.327082] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.768 [2024-07-27 01:34:47.327098] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.768 [2024-07-27 01:34:47.329153] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.768 [2024-07-27 01:34:47.338350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.768 [2024-07-27 01:34:47.338806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.339005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.339031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.768 [2024-07-27 01:34:47.339047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.768 [2024-07-27 01:34:47.339220] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.768 [2024-07-27 01:34:47.339367] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.768 [2024-07-27 01:34:47.339388] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.768 [2024-07-27 01:34:47.339402] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.768 [2024-07-27 01:34:47.341381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.768 [2024-07-27 01:34:47.350395] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.768 [2024-07-27 01:34:47.350741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.350900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.350925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.768 [2024-07-27 01:34:47.350941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.768 [2024-07-27 01:34:47.351116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.768 [2024-07-27 01:34:47.351267] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.768 [2024-07-27 01:34:47.351288] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.768 [2024-07-27 01:34:47.351302] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.768 [2024-07-27 01:34:47.353426] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.768 [2024-07-27 01:34:47.362644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.768 [2024-07-27 01:34:47.363009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.363185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.363212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.768 [2024-07-27 01:34:47.363228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.768 [2024-07-27 01:34:47.363409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.768 [2024-07-27 01:34:47.363606] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.768 [2024-07-27 01:34:47.363629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.768 [2024-07-27 01:34:47.363643] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.768 [2024-07-27 01:34:47.365609] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.768 [2024-07-27 01:34:47.374931] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.768 [2024-07-27 01:34:47.375291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.375472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.768 [2024-07-27 01:34:47.375500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.768 [2024-07-27 01:34:47.375517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.769 [2024-07-27 01:34:47.375681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.769 [2024-07-27 01:34:47.375872] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.769 [2024-07-27 01:34:47.375895] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.769 [2024-07-27 01:34:47.375909] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.769 [2024-07-27 01:34:47.377882] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.769 [2024-07-27 01:34:47.387340] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.769 [2024-07-27 01:34:47.387688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.387892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.387918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.769 [2024-07-27 01:34:47.387935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.769 [2024-07-27 01:34:47.388077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.769 [2024-07-27 01:34:47.388274] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.769 [2024-07-27 01:34:47.388297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.769 [2024-07-27 01:34:47.388312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.769 [2024-07-27 01:34:47.390496] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.769 [2024-07-27 01:34:47.399497] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.769 [2024-07-27 01:34:47.399860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.400066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.400092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.769 [2024-07-27 01:34:47.400108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.769 [2024-07-27 01:34:47.400258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.769 [2024-07-27 01:34:47.400422] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.769 [2024-07-27 01:34:47.400448] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.769 [2024-07-27 01:34:47.400463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.769 [2024-07-27 01:34:47.402614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.769 [2024-07-27 01:34:47.411747] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.769 [2024-07-27 01:34:47.412078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.412241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.412266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.769 [2024-07-27 01:34:47.412283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.769 [2024-07-27 01:34:47.412459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.769 [2024-07-27 01:34:47.412617] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.769 [2024-07-27 01:34:47.412639] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.769 [2024-07-27 01:34:47.412653] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.769 [2024-07-27 01:34:47.414731] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.769 [2024-07-27 01:34:47.423983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.769 [2024-07-27 01:34:47.424357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.424537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.424564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.769 [2024-07-27 01:34:47.424581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.769 [2024-07-27 01:34:47.424698] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.769 [2024-07-27 01:34:47.424858] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.769 [2024-07-27 01:34:47.424879] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.769 [2024-07-27 01:34:47.424893] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.769 [2024-07-27 01:34:47.426894] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.769 [2024-07-27 01:34:47.436236] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.769 [2024-07-27 01:34:47.436621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.436793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.436818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.769 [2024-07-27 01:34:47.436835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.769 [2024-07-27 01:34:47.436984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.769 [2024-07-27 01:34:47.437139] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.769 [2024-07-27 01:34:47.437161] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.769 [2024-07-27 01:34:47.437180] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.769 [2024-07-27 01:34:47.439212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.769 [2024-07-27 01:34:47.448577] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.769 [2024-07-27 01:34:47.448996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.449174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.449200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.769 [2024-07-27 01:34:47.449216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.769 [2024-07-27 01:34:47.449429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.769 [2024-07-27 01:34:47.449618] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.769 [2024-07-27 01:34:47.449641] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.769 [2024-07-27 01:34:47.449655] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.769 [2024-07-27 01:34:47.451734] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.769 [2024-07-27 01:34:47.460806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.769 [2024-07-27 01:34:47.461109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.461258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.461285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.769 [2024-07-27 01:34:47.461302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.769 [2024-07-27 01:34:47.461467] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.769 [2024-07-27 01:34:47.461643] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.769 [2024-07-27 01:34:47.461665] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.769 [2024-07-27 01:34:47.461679] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.769 [2024-07-27 01:34:47.463771] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.769 [2024-07-27 01:34:47.473153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.769 [2024-07-27 01:34:47.473449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.473654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.473681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.769 [2024-07-27 01:34:47.473698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.769 [2024-07-27 01:34:47.473895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.769 [2024-07-27 01:34:47.474093] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.769 [2024-07-27 01:34:47.474117] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.769 [2024-07-27 01:34:47.474132] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.769 [2024-07-27 01:34:47.475994] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.769 [2024-07-27 01:34:47.485402] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.769 [2024-07-27 01:34:47.485735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.485913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.769 [2024-07-27 01:34:47.485938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.769 [2024-07-27 01:34:47.485953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.769 [2024-07-27 01:34:47.486111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.769 [2024-07-27 01:34:47.486259] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.769 [2024-07-27 01:34:47.486280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.770 [2024-07-27 01:34:47.486294] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.770 [2024-07-27 01:34:47.488384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.770 [2024-07-27 01:34:47.497864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.770 [2024-07-27 01:34:47.498196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.770 [2024-07-27 01:34:47.498346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.770 [2024-07-27 01:34:47.498372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.770 [2024-07-27 01:34:47.498389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.770 [2024-07-27 01:34:47.498556] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.770 [2024-07-27 01:34:47.498738] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.770 [2024-07-27 01:34:47.498761] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.770 [2024-07-27 01:34:47.498775] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.770 [2024-07-27 01:34:47.500795] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.770 [2024-07-27 01:34:47.510284] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.770 [2024-07-27 01:34:47.510714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.770 [2024-07-27 01:34:47.510881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.770 [2024-07-27 01:34:47.510908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.770 [2024-07-27 01:34:47.510924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.770 [2024-07-27 01:34:47.511040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.770 [2024-07-27 01:34:47.511188] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.770 [2024-07-27 01:34:47.511211] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.770 [2024-07-27 01:34:47.511226] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.770 [2024-07-27 01:34:47.513327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.770 [2024-07-27 01:34:47.522893] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.770 [2024-07-27 01:34:47.523267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.770 [2024-07-27 01:34:47.523443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.770 [2024-07-27 01:34:47.523469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:55.770 [2024-07-27 01:34:47.523486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:55.770 [2024-07-27 01:34:47.523618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:55.770 [2024-07-27 01:34:47.523770] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.770 [2024-07-27 01:34:47.523792] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.770 [2024-07-27 01:34:47.523806] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.030 [2024-07-27 01:34:47.525973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:56.030 [2024-07-27 01:34:47.535026] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.030 [2024-07-27 01:34:47.535346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.030 [2024-07-27 01:34:47.535536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.030 [2024-07-27 01:34:47.535563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.030 [2024-07-27 01:34:47.535579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.030 [2024-07-27 01:34:47.535713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.030 [2024-07-27 01:34:47.535909] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.030 [2024-07-27 01:34:47.535931] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.030 [2024-07-27 01:34:47.535946] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.030 [2024-07-27 01:34:47.538187] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.030 [2024-07-27 01:34:47.547258] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.030 [2024-07-27 01:34:47.547602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.030 [2024-07-27 01:34:47.547779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.547805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.031 [2024-07-27 01:34:47.547821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.031 [2024-07-27 01:34:47.547935] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.031 [2024-07-27 01:34:47.548110] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.031 [2024-07-27 01:34:47.548133] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.031 [2024-07-27 01:34:47.548148] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.031 [2024-07-27 01:34:47.550276] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:56.031 [2024-07-27 01:34:47.559619] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.031 [2024-07-27 01:34:47.560047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.560211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.560237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.031 [2024-07-27 01:34:47.560254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.031 [2024-07-27 01:34:47.560465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.031 [2024-07-27 01:34:47.560639] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.031 [2024-07-27 01:34:47.560660] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.031 [2024-07-27 01:34:47.560674] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.031 [2024-07-27 01:34:47.562587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.031 [2024-07-27 01:34:47.571803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.031 [2024-07-27 01:34:47.572223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.572378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.572406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.031 [2024-07-27 01:34:47.572423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.031 [2024-07-27 01:34:47.572587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.031 [2024-07-27 01:34:47.572731] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.031 [2024-07-27 01:34:47.572753] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.031 [2024-07-27 01:34:47.572766] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.031 [2024-07-27 01:34:47.574905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:56.031 [2024-07-27 01:34:47.583993] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.031 [2024-07-27 01:34:47.584347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.584525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.584552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.031 [2024-07-27 01:34:47.584569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.031 [2024-07-27 01:34:47.584765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.031 [2024-07-27 01:34:47.584955] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.031 [2024-07-27 01:34:47.584977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.031 [2024-07-27 01:34:47.584991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.031 [2024-07-27 01:34:47.587337] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.031 [2024-07-27 01:34:47.596181] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.031 [2024-07-27 01:34:47.596476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.596654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.596686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.031 [2024-07-27 01:34:47.596703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.031 [2024-07-27 01:34:47.596820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.031 [2024-07-27 01:34:47.597018] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.031 [2024-07-27 01:34:47.597054] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.031 [2024-07-27 01:34:47.597077] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.031 [2024-07-27 01:34:47.599125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:56.031 [2024-07-27 01:34:47.608469] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.031 [2024-07-27 01:34:47.608814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.608951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.608978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.031 [2024-07-27 01:34:47.608995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.031 [2024-07-27 01:34:47.609135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.031 [2024-07-27 01:34:47.609305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.031 [2024-07-27 01:34:47.609327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.031 [2024-07-27 01:34:47.609342] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.031 [2024-07-27 01:34:47.611479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.031 [2024-07-27 01:34:47.620690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.031 [2024-07-27 01:34:47.621074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.621243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.621270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.031 [2024-07-27 01:34:47.621287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.031 [2024-07-27 01:34:47.621418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.031 [2024-07-27 01:34:47.621566] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.031 [2024-07-27 01:34:47.621588] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.031 [2024-07-27 01:34:47.621602] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.031 [2024-07-27 01:34:47.623670] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:56.031 [2024-07-27 01:34:47.633081] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.031 [2024-07-27 01:34:47.633380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.633529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.633557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.031 [2024-07-27 01:34:47.633579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.031 [2024-07-27 01:34:47.633775] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.031 [2024-07-27 01:34:47.633924] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.031 [2024-07-27 01:34:47.633946] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.031 [2024-07-27 01:34:47.633961] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.031 [2024-07-27 01:34:47.636354] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.031 [2024-07-27 01:34:47.645476] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.031 [2024-07-27 01:34:47.645821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.646000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.646027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.031 [2024-07-27 01:34:47.646044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.031 [2024-07-27 01:34:47.646234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.031 [2024-07-27 01:34:47.646447] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.031 [2024-07-27 01:34:47.646469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.031 [2024-07-27 01:34:47.646484] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.031 [2024-07-27 01:34:47.648599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:56.031 [2024-07-27 01:34:47.657638] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.031 [2024-07-27 01:34:47.658007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.658173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.031 [2024-07-27 01:34:47.658201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.031 [2024-07-27 01:34:47.658218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.032 [2024-07-27 01:34:47.658428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.032 [2024-07-27 01:34:47.658541] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.032 [2024-07-27 01:34:47.658562] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.032 [2024-07-27 01:34:47.658577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.032 [2024-07-27 01:34:47.660723] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.032 [2024-07-27 01:34:47.670003] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.032 [2024-07-27 01:34:47.670325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.670560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.670587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.032 [2024-07-27 01:34:47.670604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.032 [2024-07-27 01:34:47.670756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.032 [2024-07-27 01:34:47.670931] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.032 [2024-07-27 01:34:47.670953] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.032 [2024-07-27 01:34:47.670968] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.032 [2024-07-27 01:34:47.673154] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:56.032 [2024-07-27 01:34:47.682285] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.032 [2024-07-27 01:34:47.682631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.682776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.682803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.032 [2024-07-27 01:34:47.682820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.032 [2024-07-27 01:34:47.683000] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.032 [2024-07-27 01:34:47.683209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.032 [2024-07-27 01:34:47.683233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.032 [2024-07-27 01:34:47.683248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.032 [2024-07-27 01:34:47.685279] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.032 [2024-07-27 01:34:47.694539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.032 [2024-07-27 01:34:47.694866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.695035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.695070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.032 [2024-07-27 01:34:47.695089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.032 [2024-07-27 01:34:47.695287] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.032 [2024-07-27 01:34:47.695480] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.032 [2024-07-27 01:34:47.695501] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.032 [2024-07-27 01:34:47.695515] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.032 [2024-07-27 01:34:47.697546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:56.032 [2024-07-27 01:34:47.706669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.032 [2024-07-27 01:34:47.706924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.707100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.707128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.032 [2024-07-27 01:34:47.707145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.032 [2024-07-27 01:34:47.707310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.032 [2024-07-27 01:34:47.707462] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.032 [2024-07-27 01:34:47.707484] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.032 [2024-07-27 01:34:47.707498] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.032 [2024-07-27 01:34:47.709562] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.032 [2024-07-27 01:34:47.719154] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.032 [2024-07-27 01:34:47.719527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.719674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.719700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.032 [2024-07-27 01:34:47.719716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.032 [2024-07-27 01:34:47.719833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.032 [2024-07-27 01:34:47.720010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.032 [2024-07-27 01:34:47.720032] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.032 [2024-07-27 01:34:47.720046] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.032 [2024-07-27 01:34:47.722016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:56.032 [2024-07-27 01:34:47.731393] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.032 [2024-07-27 01:34:47.731785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.731990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.732016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.032 [2024-07-27 01:34:47.732033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.032 [2024-07-27 01:34:47.732206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.032 [2024-07-27 01:34:47.732402] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.032 [2024-07-27 01:34:47.732424] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.032 [2024-07-27 01:34:47.732437] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.032 [2024-07-27 01:34:47.734482] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.032 [2024-07-27 01:34:47.743506] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.032 [2024-07-27 01:34:47.743866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.744073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.744100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.032 [2024-07-27 01:34:47.744117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.032 [2024-07-27 01:34:47.744267] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.032 [2024-07-27 01:34:47.744433] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.032 [2024-07-27 01:34:47.744458] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.032 [2024-07-27 01:34:47.744473] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.032 [2024-07-27 01:34:47.746461] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:56.032 [2024-07-27 01:34:47.755736] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.032 [2024-07-27 01:34:47.756116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.756319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.756345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.032 [2024-07-27 01:34:47.756361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.032 [2024-07-27 01:34:47.756494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.032 [2024-07-27 01:34:47.756656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.032 [2024-07-27 01:34:47.756677] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.032 [2024-07-27 01:34:47.756691] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.032 [2024-07-27 01:34:47.758814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.032 [2024-07-27 01:34:47.768239] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.032 [2024-07-27 01:34:47.768598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.768748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.032 [2024-07-27 01:34:47.768776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.032 [2024-07-27 01:34:47.768794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.032 [2024-07-27 01:34:47.768959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.032 [2024-07-27 01:34:47.769168] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.033 [2024-07-27 01:34:47.769192] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.033 [2024-07-27 01:34:47.769207] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.033 [2024-07-27 01:34:47.771144] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
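The errno = 111 repeated in every retry above is ECONNREFUSED: the bdevperf initiator keeps calling connect() toward 10.0.0.2:4420 while the target side is being restarted and has not yet re-created its listener, so each reset fails inside nvme_tcp_qpair_connect_sock and bdev_nvme schedules another reset a few milliseconds later. A minimal way to watch for the moment the listener comes up, assuming bash's /dev/tcp redirection is available (this probe is not part of the SPDK test scripts):

# poll 10.0.0.2:4420 until a TCP connect succeeds; until then connect() keeps failing with errno 111
until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
  sleep 0.2
done
echo '10.0.0.2:4420 is accepting connections'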
00:26:56.033 01:34:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:56.033 01:34:47 -- common/autotest_common.sh@852 -- # return 0 00:26:56.033 01:34:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:56.033 01:34:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:56.033 01:34:47 -- common/autotest_common.sh@10 -- # set +x 00:26:56.033 [2024-07-27 01:34:47.780522] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.033 [2024-07-27 01:34:47.780878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.033 [2024-07-27 01:34:47.781021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.033 [2024-07-27 01:34:47.781047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.033 [2024-07-27 01:34:47.781071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.033 [2024-07-27 01:34:47.781237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.033 [2024-07-27 01:34:47.781409] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.033 [2024-07-27 01:34:47.781430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.033 [2024-07-27 01:34:47.781445] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.033 [2024-07-27 01:34:47.783530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.291 [2024-07-27 01:34:47.792900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.291 [2024-07-27 01:34:47.793231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.291 [2024-07-27 01:34:47.793417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.291 [2024-07-27 01:34:47.793445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.291 [2024-07-27 01:34:47.793463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.291 [2024-07-27 01:34:47.793613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.291 [2024-07-27 01:34:47.793775] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.291 [2024-07-27 01:34:47.793796] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.291 [2024-07-27 01:34:47.793818] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.291 [2024-07-27 01:34:47.796172] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:56.291 01:34:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.292 01:34:47 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:56.292 01:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.292 01:34:47 -- common/autotest_common.sh@10 -- # set +x 00:26:56.292 [2024-07-27 01:34:47.804686] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.292 [2024-07-27 01:34:47.805182] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.292 [2024-07-27 01:34:47.805595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.292 [2024-07-27 01:34:47.805765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.292 [2024-07-27 01:34:47.805791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.292 [2024-07-27 01:34:47.805808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.292 [2024-07-27 01:34:47.805972] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.292 [2024-07-27 01:34:47.806125] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.292 [2024-07-27 01:34:47.806148] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.292 [2024-07-27 01:34:47.806162] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.292 [2024-07-27 01:34:47.808132] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.292 01:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.292 01:34:47 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:56.292 01:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.292 01:34:47 -- common/autotest_common.sh@10 -- # set +x 00:26:56.292 [2024-07-27 01:34:47.817398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.292 [2024-07-27 01:34:47.817722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.292 [2024-07-27 01:34:47.817875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.292 [2024-07-27 01:34:47.817916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.292 [2024-07-27 01:34:47.817933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.292 [2024-07-27 01:34:47.818139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.292 [2024-07-27 01:34:47.818308] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.292 [2024-07-27 01:34:47.818333] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.292 [2024-07-27 01:34:47.818362] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:56.292 [2024-07-27 01:34:47.820323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.292 [2024-07-27 01:34:47.829871] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.292 [2024-07-27 01:34:47.830433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.292 [2024-07-27 01:34:47.830607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.292 [2024-07-27 01:34:47.830633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.292 [2024-07-27 01:34:47.830653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.292 [2024-07-27 01:34:47.830812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.292 [2024-07-27 01:34:47.831025] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.292 [2024-07-27 01:34:47.831069] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.292 [2024-07-27 01:34:47.831089] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.292 [2024-07-27 01:34:47.833340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.292 Malloc0 00:26:56.292 01:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.292 01:34:47 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:56.292 01:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.292 01:34:47 -- common/autotest_common.sh@10 -- # set +x 00:26:56.292 [2024-07-27 01:34:47.842405] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.292 [2024-07-27 01:34:47.842817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.292 [2024-07-27 01:34:47.842980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.292 [2024-07-27 01:34:47.843006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.292 [2024-07-27 01:34:47.843026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.292 [2024-07-27 01:34:47.843188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.292 [2024-07-27 01:34:47.843346] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.292 [2024-07-27 01:34:47.843368] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.292 [2024-07-27 01:34:47.843400] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.292 [2024-07-27 01:34:47.845626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:56.292 01:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.292 01:34:47 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:56.292 01:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.292 01:34:47 -- common/autotest_common.sh@10 -- # set +x 00:26:56.292 [2024-07-27 01:34:47.854888] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.292 [2024-07-27 01:34:47.855232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.292 [2024-07-27 01:34:47.855391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.292 [2024-07-27 01:34:47.855417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a0400 with addr=10.0.0.2, port=4420 00:26:56.292 [2024-07-27 01:34:47.855434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a0400 is same with the state(5) to be set 00:26:56.292 [2024-07-27 01:34:47.855584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a0400 (9): Bad file descriptor 00:26:56.292 [2024-07-27 01:34:47.855760] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.292 [2024-07-27 01:34:47.855781] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.292 [2024-07-27 01:34:47.855795] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.292 01:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.292 01:34:47 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:56.292 01:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.292 01:34:47 -- common/autotest_common.sh@10 -- # set +x 00:26:56.292 [2024-07-27 01:34:47.857930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.292 [2024-07-27 01:34:47.860857] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.292 01:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.292 01:34:47 -- host/bdevperf.sh@38 -- # wait 750261 00:26:56.292 [2024-07-27 01:34:47.867272] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.292 [2024-07-27 01:34:47.975798] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
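The xtrace interleaved with the retries above is the target side being rebuilt over RPC: create the TCP transport, back it with a 64 MiB malloc bdev (512-byte blocks), create subsystem nqn.2016-06.io.spdk:cnode1, attach Malloc0 as its namespace, and finally add the TCP listener on 10.0.0.2:4420. As soon as the "Target Listening" notice appears (01:34:47.860), the pending reconnect at 01:34:47.867 succeeds and the bdevperf verify workload proceeds. Outside the harness the same sequence can be issued with scripts/rpc.py; a sketch using the exact arguments from the trace (rpc_cmd is only the harness's wrapper around the RPC socket, and running from the SPDK repository root against the default socket is assumed):

# same bring-up sequence as the rpc_cmd calls traced above, issued via scripts/rpc.py
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420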
00:27:06.278 00:27:06.278 Latency(us) 00:27:06.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.278 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:06.278 Verification LBA range: start 0x0 length 0x4000 00:27:06.278 Nvme1n1 : 15.01 8473.65 33.10 16004.55 0.00 5214.48 1292.52 21554.06 00:27:06.278 =================================================================================================================== 00:27:06.278 Total : 8473.65 33.10 16004.55 0.00 5214.48 1292.52 21554.06 00:27:06.278 01:34:56 -- host/bdevperf.sh@39 -- # sync 00:27:06.278 01:34:56 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:06.278 01:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:06.278 01:34:56 -- common/autotest_common.sh@10 -- # set +x 00:27:06.278 01:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:06.278 01:34:56 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:06.278 01:34:56 -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:06.278 01:34:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:06.278 01:34:56 -- nvmf/common.sh@116 -- # sync 00:27:06.278 01:34:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:06.278 01:34:56 -- nvmf/common.sh@119 -- # set +e 00:27:06.278 01:34:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:06.278 01:34:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:06.278 rmmod nvme_tcp 00:27:06.278 rmmod nvme_fabrics 00:27:06.278 rmmod nvme_keyring 00:27:06.278 01:34:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:06.278 01:34:56 -- nvmf/common.sh@123 -- # set -e 00:27:06.278 01:34:56 -- nvmf/common.sh@124 -- # return 0 00:27:06.278 01:34:56 -- nvmf/common.sh@477 -- # '[' -n 751064 ']' 00:27:06.278 01:34:56 -- nvmf/common.sh@478 -- # killprocess 751064 00:27:06.278 01:34:56 -- common/autotest_common.sh@926 -- # '[' -z 751064 ']' 00:27:06.278 01:34:56 -- common/autotest_common.sh@930 -- # kill -0 751064 00:27:06.278 01:34:56 -- common/autotest_common.sh@931 -- # uname 00:27:06.278 01:34:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:06.278 01:34:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 751064 00:27:06.278 01:34:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:06.278 01:34:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:06.278 01:34:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 751064' 00:27:06.278 killing process with pid 751064 00:27:06.278 01:34:56 -- common/autotest_common.sh@945 -- # kill 751064 00:27:06.278 01:34:56 -- common/autotest_common.sh@950 -- # wait 751064 00:27:06.278 01:34:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:06.278 01:34:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:06.278 01:34:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:06.278 01:34:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:06.279 01:34:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:06.279 01:34:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.279 01:34:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.279 01:34:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.215 01:34:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:07.215 00:27:07.215 real 0m23.158s 00:27:07.215 user 1m1.865s 00:27:07.215 sys 0m4.587s 00:27:07.215 01:34:58 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:27:07.215 01:34:58 -- common/autotest_common.sh@10 -- # set +x 00:27:07.215 ************************************ 00:27:07.215 END TEST nvmf_bdevperf 00:27:07.215 ************************************ 00:27:07.215 01:34:58 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:07.215 01:34:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:07.215 01:34:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:07.215 01:34:58 -- common/autotest_common.sh@10 -- # set +x 00:27:07.215 ************************************ 00:27:07.215 START TEST nvmf_target_disconnect 00:27:07.215 ************************************ 00:27:07.215 01:34:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:07.474 * Looking for test storage... 00:27:07.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:07.474 01:34:59 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.474 01:34:59 -- nvmf/common.sh@7 -- # uname -s 00:27:07.474 01:34:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.474 01:34:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.474 01:34:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.474 01:34:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.474 01:34:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.474 01:34:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.474 01:34:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.474 01:34:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.474 01:34:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.474 01:34:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.474 01:34:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:07.474 01:34:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:07.474 01:34:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.474 01:34:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.474 01:34:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.474 01:34:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.474 01:34:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.474 01:34:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.474 01:34:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.474 01:34:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.474 01:34:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.474 01:34:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.474 01:34:59 -- paths/export.sh@5 -- # export PATH 00:27:07.474 01:34:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.474 01:34:59 -- nvmf/common.sh@46 -- # : 0 00:27:07.474 01:34:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:07.474 01:34:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:07.474 01:34:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:07.474 01:34:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.474 01:34:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.474 01:34:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:07.474 01:34:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:07.474 01:34:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:07.474 01:34:59 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:07.474 01:34:59 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:07.474 01:34:59 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:07.474 01:34:59 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:27:07.474 01:34:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:07.474 01:34:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.474 01:34:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:07.474 01:34:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:07.474 01:34:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:07.474 01:34:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.474 01:34:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.474 01:34:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.474 01:34:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:07.474 01:34:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:07.474 01:34:59 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:27:07.474 01:34:59 -- common/autotest_common.sh@10 -- # set +x 00:27:09.376 01:35:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:09.376 01:35:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:09.376 01:35:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:09.376 01:35:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:09.376 01:35:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:09.376 01:35:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:09.376 01:35:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:09.376 01:35:00 -- nvmf/common.sh@294 -- # net_devs=() 00:27:09.376 01:35:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:09.376 01:35:00 -- nvmf/common.sh@295 -- # e810=() 00:27:09.376 01:35:00 -- nvmf/common.sh@295 -- # local -ga e810 00:27:09.376 01:35:00 -- nvmf/common.sh@296 -- # x722=() 00:27:09.376 01:35:00 -- nvmf/common.sh@296 -- # local -ga x722 00:27:09.376 01:35:00 -- nvmf/common.sh@297 -- # mlx=() 00:27:09.376 01:35:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:09.376 01:35:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.376 01:35:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.376 01:35:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.376 01:35:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.376 01:35:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.376 01:35:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.376 01:35:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.376 01:35:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.376 01:35:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.376 01:35:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.376 01:35:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.376 01:35:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:09.376 01:35:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:09.376 01:35:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:09.376 01:35:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:09.376 01:35:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:09.376 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:09.376 01:35:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:09.376 01:35:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:09.376 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:09.376 01:35:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:09.376 01:35:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:09.376 01:35:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.376 01:35:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:09.376 01:35:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.376 01:35:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:09.376 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:09.376 01:35:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.376 01:35:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:09.376 01:35:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.376 01:35:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:09.376 01:35:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.376 01:35:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:09.376 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:09.376 01:35:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.376 01:35:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:09.376 01:35:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:09.376 01:35:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:09.376 01:35:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:09.376 01:35:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.376 01:35:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.376 01:35:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.376 01:35:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:09.376 01:35:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.376 01:35:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.376 01:35:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:09.376 01:35:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.376 01:35:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.376 01:35:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:09.376 01:35:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:09.376 01:35:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.376 01:35:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.376 01:35:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.376 01:35:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.376 01:35:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:09.376 01:35:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.376 01:35:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.376 01:35:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.376 01:35:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:09.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:09.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:27:09.376 00:27:09.376 --- 10.0.0.2 ping statistics --- 00:27:09.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.376 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:27:09.376 01:35:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:09.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:27:09.376 00:27:09.376 --- 10.0.0.1 ping statistics --- 00:27:09.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.376 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:27:09.376 01:35:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.376 01:35:01 -- nvmf/common.sh@410 -- # return 0 00:27:09.376 01:35:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:09.376 01:35:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.376 01:35:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:09.376 01:35:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:09.376 01:35:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.376 01:35:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:09.376 01:35:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:09.376 01:35:01 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:09.376 01:35:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:09.376 01:35:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:09.376 01:35:01 -- common/autotest_common.sh@10 -- # set +x 00:27:09.376 ************************************ 00:27:09.376 START TEST nvmf_target_disconnect_tc1 00:27:09.376 ************************************ 00:27:09.376 01:35:01 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:27:09.376 01:35:01 -- host/target_disconnect.sh@32 -- # set +e 00:27:09.376 01:35:01 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:09.634 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.634 [2024-07-27 01:35:01.191864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-07-27 01:35:01.192111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-07-27 01:35:01.192155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc920 with addr=10.0.0.2, port=4420 00:27:09.634 [2024-07-27 01:35:01.192193] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:09.634 [2024-07-27 01:35:01.192217] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:09.634 [2024-07-27 01:35:01.192248] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:09.634 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:09.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:09.634 Initializing NVMe Controllers 00:27:09.634 01:35:01 -- host/target_disconnect.sh@33 -- # trap - ERR 00:27:09.634 01:35:01 -- host/target_disconnect.sh@33 -- # print_backtrace 00:27:09.634 01:35:01 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:27:09.634 01:35:01 -- common/autotest_common.sh@1132 -- # return 0 00:27:09.634 
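Because this is a NET_TYPE=phy run, the two E810 ports discovered above are cabled so they can reach each other: nvmf_tcp_init moves cvl_0_0 into a private network namespace as the target interface (10.0.0.2) and leaves cvl_0_1 in the default namespace as the initiator interface (10.0.0.1), then pings in both directions to prove the link before any NVMe/TCP traffic flows. A condensed sketch of the ip/iptables calls traced above (the interface names are the ones this host reported; other machines will differ):

# target port goes into its own namespace, initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator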
01:35:01 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:27:09.634 01:35:01 -- host/target_disconnect.sh@41 -- # set -e 00:27:09.634 00:27:09.634 real 0m0.088s 00:27:09.634 user 0m0.036s 00:27:09.634 sys 0m0.051s 00:27:09.634 01:35:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:09.634 01:35:01 -- common/autotest_common.sh@10 -- # set +x 00:27:09.634 ************************************ 00:27:09.634 END TEST nvmf_target_disconnect_tc1 00:27:09.634 ************************************ 00:27:09.634 01:35:01 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:09.634 01:35:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:09.634 01:35:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:09.634 01:35:01 -- common/autotest_common.sh@10 -- # set +x 00:27:09.634 ************************************ 00:27:09.634 START TEST nvmf_target_disconnect_tc2 00:27:09.634 ************************************ 00:27:09.634 01:35:01 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:27:09.634 01:35:01 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:27:09.634 01:35:01 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:09.634 01:35:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:09.634 01:35:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:09.634 01:35:01 -- common/autotest_common.sh@10 -- # set +x 00:27:09.634 01:35:01 -- nvmf/common.sh@469 -- # nvmfpid=754140 00:27:09.634 01:35:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:09.634 01:35:01 -- nvmf/common.sh@470 -- # waitforlisten 754140 00:27:09.634 01:35:01 -- common/autotest_common.sh@819 -- # '[' -z 754140 ']' 00:27:09.634 01:35:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.634 01:35:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:09.634 01:35:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.634 01:35:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:09.634 01:35:01 -- common/autotest_common.sh@10 -- # set +x 00:27:09.634 [2024-07-27 01:35:01.273875] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:09.634 [2024-07-27 01:35:01.273949] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.634 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.634 [2024-07-27 01:35:01.340794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:09.894 [2024-07-27 01:35:01.450426] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:09.894 [2024-07-27 01:35:01.450574] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.894 [2024-07-27 01:35:01.450591] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.894 [2024-07-27 01:35:01.450603] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
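For tc2 the target is started for real: nvmfappstart -m 0xF0 launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0xF0 (cores 4-7, matching the four reactor notices above) and records its pid (754140 here), then waitforlisten blocks until the app's RPC socket answers. A rough equivalent outside the harness, with the wait expressed as a simple RPC poll rather than the harness's own waitforlisten implementation (an assumption, not the script's actual code):

# start the target in the target namespace on cores 4-7 and wait until its RPC socket responds
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is ready for RPCs"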
00:27:09.894 [2024-07-27 01:35:01.450733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:27:09.894 [2024-07-27 01:35:01.450803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:27:09.894 [2024-07-27 01:35:01.450871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:27:09.894 [2024-07-27 01:35:01.450873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:10.831 01:35:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:10.831 01:35:02 -- common/autotest_common.sh@852 -- # return 0 00:27:10.831 01:35:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:10.831 01:35:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:10.831 01:35:02 -- common/autotest_common.sh@10 -- # set +x 00:27:10.831 01:35:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.831 01:35:02 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:10.831 01:35:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:10.831 01:35:02 -- common/autotest_common.sh@10 -- # set +x 00:27:10.831 Malloc0 00:27:10.831 01:35:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:10.831 01:35:02 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:10.831 01:35:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:10.831 01:35:02 -- common/autotest_common.sh@10 -- # set +x 00:27:10.831 [2024-07-27 01:35:02.313967] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.831 01:35:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:10.831 01:35:02 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.831 01:35:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:10.831 01:35:02 -- common/autotest_common.sh@10 -- # set +x 00:27:10.831 01:35:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:10.831 01:35:02 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:10.831 01:35:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:10.831 01:35:02 -- common/autotest_common.sh@10 -- # set +x 00:27:10.831 01:35:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:10.831 01:35:02 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:10.831 01:35:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:10.831 01:35:02 -- common/autotest_common.sh@10 -- # set +x 00:27:10.831 [2024-07-27 01:35:02.342266] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.831 01:35:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:10.831 01:35:02 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:10.831 01:35:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:10.831 01:35:02 -- common/autotest_common.sh@10 -- # set +x 00:27:10.831 01:35:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:10.831 01:35:02 -- host/target_disconnect.sh@50 -- # reconnectpid=754295 00:27:10.831 01:35:02 -- host/target_disconnect.sh@52 -- # sleep 2 00:27:10.831 01:35:02 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:10.831 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.761 01:35:04 -- host/target_disconnect.sh@53 -- # kill -9 754140 00:27:12.761 01:35:04 -- host/target_disconnect.sh@55 -- # sleep 2 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 [2024-07-27 01:35:04.366576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed 
with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 [2024-07-27 01:35:04.366907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, 
sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Write completed with error (sct=0, sc=8) 00:27:12.761 starting I/O failed 00:27:12.761 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Write completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Write completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Write completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Write completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Write completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 [2024-07-27 01:35:04.367276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Write completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Write completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Write completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Write completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Write completed with error (sct=0, sc=8) 
00:27:12.762 starting I/O failed 00:27:12.762 Write completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Write completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Write completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 Read completed with error (sct=0, sc=8) 00:27:12.762 starting I/O failed 00:27:12.762 [2024-07-27 01:35:04.367611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.762 [2024-07-27 01:35:04.367860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.368077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.368106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.368262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.368438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.368478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.368690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.368892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.368917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.369106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.369248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.369273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 
00:27:12.762 [2024-07-27 01:35:04.369454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.369634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.369661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.369870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.370046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.370081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.370239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.370396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.370422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.370593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.370738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.370763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.370962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.371116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.371142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.371289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.371475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.371500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.371726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.371955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.371983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 
00:27:12.762 [2024-07-27 01:35:04.372159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.372303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.372329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.372578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.372777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.372823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.373018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.373205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.373232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.373394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.373565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.373605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.373825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.374041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.374072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.374221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.374367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.374392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.762 qpair failed and we were unable to recover it. 00:27:12.762 [2024-07-27 01:35:04.374542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.374712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.762 [2024-07-27 01:35:04.374739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 
00:27:12.763 [2024-07-27 01:35:04.374929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.375090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.375116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.375260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.375441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.375466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.375642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.375860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.375889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.376081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.376250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.376276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.376426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.376644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.376673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.376891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.377093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.377120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.377265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.377437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.377461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 
00:27:12.763 [2024-07-27 01:35:04.377633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.377878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.377904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.378075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.378254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.378280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.378459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.378641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.378667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.378876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.379046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.379083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.379246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.379421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.379462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.379644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.379844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.379869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.380087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.380239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.380264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 
00:27:12.763 [2024-07-27 01:35:04.380465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.380757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.380782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.380972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.381180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.381208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.381386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.381602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.381631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.381815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.382015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.382042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.382209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.382361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.382387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.382562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.382718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.382760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.382968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.383156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.383182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 
00:27:12.763 [2024-07-27 01:35:04.383325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.383587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.383611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.383885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.384056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.384086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.384260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.384449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.384473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.384646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.384816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.384841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.385020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.385170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.385197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.385369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.385561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.385589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 00:27:12.763 [2024-07-27 01:35:04.385803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.385994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.763 [2024-07-27 01:35:04.386023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.763 qpair failed and we were unable to recover it. 
00:27:12.764 [2024-07-27 01:35:04.386230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.386387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.386412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.386580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.386746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.386771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.386923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.387102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.387128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.387273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.387412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.387438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.387579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.387746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.387772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.387952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.388129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.388157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.388310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.388508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.388534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 
00:27:12.764 [2024-07-27 01:35:04.388737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.388879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.388904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.389097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.389291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.389328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.389530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.389706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.389749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.389974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.390143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.390169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.390326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.390569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.390622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.390883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.391029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.391054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.391217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.391393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.391436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 
00:27:12.764 [2024-07-27 01:35:04.391601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.391796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.391821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.391996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.392145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.392171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.392343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.392511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.392536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.392708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.392941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.392967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.393145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.393289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.393314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.393530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.393698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.393724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.393889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.394090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.394115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 
00:27:12.764 [2024-07-27 01:35:04.394270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.394491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.394519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.394724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.394910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.394938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.395143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.395348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.395374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.395549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.395688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.395713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.395910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.396116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.396142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.764 [2024-07-27 01:35:04.396304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.396597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.764 [2024-07-27 01:35:04.396622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.764 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.396807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.396983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.397009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 
00:27:12.765 [2024-07-27 01:35:04.397220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.397376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.397401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.397590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.397764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.397789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.397970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.398169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.398196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.398356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.398582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.398611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.398778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.398996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.399025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.399213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.399361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.399386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.399616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.399948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.399973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 
00:27:12.765 [2024-07-27 01:35:04.400169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.400373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.400398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.400622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.400816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.400844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.401042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.401267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.401296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.401484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.401688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.401721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.401941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.402130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.402157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.402353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.402549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.402574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.402753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.402962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.402990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 
00:27:12.765 [2024-07-27 01:35:04.403182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.403375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.403417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.403603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.403774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.403813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.404048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.404268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.404297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.404506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.404760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.404784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.404987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.405180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.405211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.405429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.405631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.405656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.405822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.406031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.406056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 
00:27:12.765 [2024-07-27 01:35:04.406241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.406423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.406450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.406731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.406914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.406943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.765 qpair failed and we were unable to recover it. 00:27:12.765 [2024-07-27 01:35:04.407127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.407327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.765 [2024-07-27 01:35:04.407352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.407537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.407711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.407736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.407939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.408142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.408169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.408339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.408555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.408607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.408812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.408987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.409012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 
00:27:12.766 [2024-07-27 01:35:04.409183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.409356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.409382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.409599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.409782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.409810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.409970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.410185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.410214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.410444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.410586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.410616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.410827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.410981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.411006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.411190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.411344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.411372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.411603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.411775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.411800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 
00:27:12.766 [2024-07-27 01:35:04.411951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.412148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.412174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.412372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.412568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.412593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.412813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.413003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.413034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.413218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.413385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.413411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.413580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.413760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.413785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.413997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.414148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.766 [2024-07-27 01:35:04.414190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.766 qpair failed and we were unable to recover it. 00:27:12.766 [2024-07-27 01:35:04.414406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.414579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.414604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 
00:27:12.767 [2024-07-27 01:35:04.414805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.415003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.415033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.415264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.415431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.415460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.415642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.415844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.415869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.416051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.416276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.416305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.416525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.416822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.416871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.417064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.417284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.417310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.417483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.417626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.417651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 
00:27:12.767 [2024-07-27 01:35:04.417822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.417974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.418000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.418229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.418445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.418496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.418697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.418894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.418919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.419118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.419333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.419359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.419509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.419687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.419713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.419914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.420050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.420081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.420256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.420444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.420471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 
00:27:12.767 [2024-07-27 01:35:04.420691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.420894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.420919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.421092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.421318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.421346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.421544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.421747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.421774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.421919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.422071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.422097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.422248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.422397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.422422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.422598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.422807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.422833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.423007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.423162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.423189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 
00:27:12.767 [2024-07-27 01:35:04.423337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.423526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.423554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.423750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.423950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.423978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.424212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.424390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.424415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.424620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.424804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.424859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.425076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.425264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.425292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.425488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.425669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.425697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.767 qpair failed and we were unable to recover it. 00:27:12.767 [2024-07-27 01:35:04.425885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.426114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.767 [2024-07-27 01:35:04.426142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 
00:27:12.768 [2024-07-27 01:35:04.426362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.426552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.426581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.426774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.426955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.426983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.427153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.427302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.427332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.427483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.427736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.427791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.428015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.428218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.428244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.428426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.428598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.428626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.428815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.429028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.429056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 
00:27:12.768 [2024-07-27 01:35:04.429257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.429446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.429473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.429665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.429857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.429886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.430101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.430292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.430320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.430520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.430777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.430828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.431029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.431205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.431231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.431373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.431542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.431567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.431745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.431946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.431974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 
00:27:12.768 [2024-07-27 01:35:04.432171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.432317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.432343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.432532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.432739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.432765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.432957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.433176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.433205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.433414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.433635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.433663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.433847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.434029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.434057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.434256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.434450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.434479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.434655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.434823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.434848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 
00:27:12.768 [2024-07-27 01:35:04.435013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.435245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.435274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.435461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.435609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.435635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.435813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.436008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.436033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.436201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.436397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.436426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.436617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.436808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.436836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.437028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.437211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.437236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 00:27:12.768 [2024-07-27 01:35:04.437409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.437611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.768 [2024-07-27 01:35:04.437636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.768 qpair failed and we were unable to recover it. 
00:27:12.768 [2024-07-27 01:35:04.437827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.438014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.438042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.438248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.438393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.438436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.438655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.438933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.438961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.439194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.439351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.439376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.439579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.439756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.439781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.439959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.440138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.440165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.440320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.440457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.440483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 
00:27:12.769 [2024-07-27 01:35:04.440654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.440831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.440857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.441069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.441244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.441286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.441508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.441730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.441758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.441956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.442134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.442160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.442336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.442506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.442531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.442722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.442913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.442941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.443136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.443289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.443314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 
00:27:12.769 [2024-07-27 01:35:04.443497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.443756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.443808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.444024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.444202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.444231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.444401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.444613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.444646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.444861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.445035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.445066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.445268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.445482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.445510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.445698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.445841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.445888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.446095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.446282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.446308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 
00:27:12.769 [2024-07-27 01:35:04.446507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.446651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.446676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.446856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.446988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.447014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.447192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.447366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.447392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.447569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.447909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.447960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.448159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.448307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.448337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.448510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.448677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.448718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 00:27:12.769 [2024-07-27 01:35:04.448955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.449110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.449138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.769 qpair failed and we were unable to recover it. 
00:27:12.769 [2024-07-27 01:35:04.449319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.449541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.769 [2024-07-27 01:35:04.449569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.449790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.449961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.449989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.450211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.450386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.450412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.450613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.450773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.450820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.450981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.451204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.451231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.451414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.451567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.451593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.451793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.451949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.451978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 
00:27:12.770 [2024-07-27 01:35:04.452204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.452352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.452378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.452614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.452981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.453044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.453245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.453427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.453468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.453685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.453889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.453915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.454095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.454315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.454343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.454515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.454711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.454736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.454900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.455115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.455141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 
00:27:12.770 [2024-07-27 01:35:04.455337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.455519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.455547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.455744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.455918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.455944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.456128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.456300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.456326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.456562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.456826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.456879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.457083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.457248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.457274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.457468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.457621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.457662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.457863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.458052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.458085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 
00:27:12.770 [2024-07-27 01:35:04.458280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.458450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.458476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.458647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.458835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.458861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.459087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.459295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.459323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.459516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.459681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.459707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.459884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.460048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.460080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.770 qpair failed and we were unable to recover it. 00:27:12.770 [2024-07-27 01:35:04.460278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.770 [2024-07-27 01:35:04.460432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.460475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.460665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.460840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.460866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 
00:27:12.771 [2024-07-27 01:35:04.461017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.461213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.461242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.461400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.461591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.461621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.461843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.462013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.462039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.462226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.462423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.462449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.462595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.462769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.462811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.463003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.463191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.463221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.463443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.463723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.463751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 
00:27:12.771 [2024-07-27 01:35:04.463955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.464148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.464178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.464379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.464563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.464588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.464786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.465006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.465032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.465247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.465422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.465449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.465653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.465976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.466032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.466231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.466429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.466455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.466681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.466918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.466944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 
00:27:12.771 [2024-07-27 01:35:04.467144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.467323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.467365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.467532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.467747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.467775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.468008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.468184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.468210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.468394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.468724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.468786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.468953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.469169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.469198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.469377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.469595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.469623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.469825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.469998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.470028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 
00:27:12.771 [2024-07-27 01:35:04.470183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.470354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.470380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.470567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.470718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.470746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.470966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.471188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.471218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.771 qpair failed and we were unable to recover it. 00:27:12.771 [2024-07-27 01:35:04.471386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.471608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.771 [2024-07-27 01:35:04.471633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.471803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.471978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.472003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.472184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.472379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.472469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.472665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.472887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.472913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 
00:27:12.772 [2024-07-27 01:35:04.473139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.473299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.473328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.473526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.473776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.473826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.474009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.474198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.474224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.474428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.474696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.474724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.474900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.475070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.475111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.475280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.475465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.475493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.475648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.475839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.475867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 
00:27:12.772 [2024-07-27 01:35:04.476056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.476237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.476263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.476408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.476572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.476600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.476785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.476996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.477021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.477207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.477381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.477407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.477605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.477917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.477975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.478167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.478378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.478406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.478601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.478789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.478817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 
00:27:12.772 [2024-07-27 01:35:04.479038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.479272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.479301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.479488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.479687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.479713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.479909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.480073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.480102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.480299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.480517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.480545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.480729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.480915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.480947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.481156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.481369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.481397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.481584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.481794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.481822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 
00:27:12.772 [2024-07-27 01:35:04.482022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.482238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.482265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.482480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.482650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.482676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.482880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.483093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.483119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.483361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.483549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.772 [2024-07-27 01:35:04.483577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.772 qpair failed and we were unable to recover it. 00:27:12.772 [2024-07-27 01:35:04.483733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.483896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.483921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.484144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.484299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.484327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.484493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.484682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.484713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 
00:27:12.773 [2024-07-27 01:35:04.484935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.485128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.485157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.485374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.485519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.485561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.485754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.485935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.485963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.486122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.486276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.486301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.486489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.486750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.486800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.486991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.487187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.487221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.487447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.487618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.487643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 
00:27:12.773 [2024-07-27 01:35:04.487861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.488074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.488103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.488267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.488448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.488476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.488668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.488843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.488868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.489043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.489235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.489264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.489456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.489753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.489805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.490025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.490215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.490243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.490435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.490600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.490628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 
00:27:12.773 [2024-07-27 01:35:04.490818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.490985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.491026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.491229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.491403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.491428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.491629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.491817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.491846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.492154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.492381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.492411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.492600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.492783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.492811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.493033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.493254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.493283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.493512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.493809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.493859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 
00:27:12.773 [2024-07-27 01:35:04.494079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.494235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.494261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.494432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.494696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.494747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.494927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.495098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.495125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.495331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.495590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.495619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.773 qpair failed and we were unable to recover it. 00:27:12.773 [2024-07-27 01:35:04.495806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.773 [2024-07-27 01:35:04.495967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.495995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.496224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.496411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.496458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.496633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.496824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.496864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 
00:27:12.774 [2024-07-27 01:35:04.497068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.497214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.497240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.497426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.497622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.497650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.498035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.498197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.498225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.498385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.498541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.498569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.498754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.498942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.498970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.499135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.499312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.499338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.499566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.499753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.499782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 
00:27:12.774 [2024-07-27 01:35:04.499952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.500104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.500133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.500343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.500537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.500563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.500801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.501008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.501034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.501238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.501424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.501494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.501678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.501885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.501940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.502135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.502328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.502357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.502512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.502685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.502711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 
00:27:12.774 [2024-07-27 01:35:04.502973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.503177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.503204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.503351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.503505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.503530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.503767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.503921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.503949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.504142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.504332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.504360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.504578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.504736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.504764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.504986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.505172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.505201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.505397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.505543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.505588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 
00:27:12.774 [2024-07-27 01:35:04.505771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.505968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.505994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.506172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.506408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.506459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.506671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.506819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.506844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.507046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.507220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.507249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.507463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.507745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.774 [2024-07-27 01:35:04.507772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.774 qpair failed and we were unable to recover it. 00:27:12.774 [2024-07-27 01:35:04.507994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.508140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.508167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 00:27:12.775 [2024-07-27 01:35:04.508348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.508537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.508566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 
00:27:12.775 [2024-07-27 01:35:04.508738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.508958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.508991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 00:27:12.775 [2024-07-27 01:35:04.509169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.509344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.509370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 00:27:12.775 [2024-07-27 01:35:04.509578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.509795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.509820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 00:27:12.775 [2024-07-27 01:35:04.509989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.510193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.510219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 00:27:12.775 [2024-07-27 01:35:04.510396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.510572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.510598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 00:27:12.775 [2024-07-27 01:35:04.510761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.510958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.510984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 00:27:12.775 [2024-07-27 01:35:04.511187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.511482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.511534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 
00:27:12.775 [2024-07-27 01:35:04.511723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.511939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.511967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 00:27:12.775 [2024-07-27 01:35:04.512189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.512403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.512432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 00:27:12.775 [2024-07-27 01:35:04.512604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.512797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.512825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 00:27:12.775 [2024-07-27 01:35:04.513003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.513202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.513231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 00:27:12.775 [2024-07-27 01:35:04.513398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.513633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.513688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 00:27:12.775 [2024-07-27 01:35:04.513908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.514101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.514130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 00:27:12.775 [2024-07-27 01:35:04.514333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.514515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.775 [2024-07-27 01:35:04.514540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:12.775 qpair failed and we were unable to recover it. 
00:27:13.045 [2024-07-27 01:35:04.514719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.045 [2024-07-27 01:35:04.514946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.045 [2024-07-27 01:35:04.514974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.045 qpair failed and we were unable to recover it. 00:27:13.045 [2024-07-27 01:35:04.515195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.045 [2024-07-27 01:35:04.515353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.045 [2024-07-27 01:35:04.515382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.045 qpair failed and we were unable to recover it. 00:27:13.045 [2024-07-27 01:35:04.515604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.045 [2024-07-27 01:35:04.515775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.045 [2024-07-27 01:35:04.515801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.045 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.516016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.516205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.516231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.516398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.516660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.516710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.516907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.517100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.517131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.517349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.517559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.517585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 
00:27:13.046 [2024-07-27 01:35:04.517758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.517953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.517980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.518175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.518343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.518384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.518569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.518884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.518936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.519102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.519321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.519349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.519540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.519798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.519857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.520054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.520227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.520267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.520460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.520707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.520754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 
00:27:13.046 [2024-07-27 01:35:04.520940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.521134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.521163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.521386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.521569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.521597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.521773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.521921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.521947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.522118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.522348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.522399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.522622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.522849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.522874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.523030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.523228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.523254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.523402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.523573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.523598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 
00:27:13.046 [2024-07-27 01:35:04.523798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.524009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.524035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.524188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.524408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.524437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.524631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.524797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.524825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.525040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.525222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.525248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.525446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.525755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.525817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.526012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.526159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.526186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.526406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.526791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.526852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 
00:27:13.046 [2024-07-27 01:35:04.527073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.527291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.527316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.527494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.527697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.527726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.527884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.528107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.046 [2024-07-27 01:35:04.528133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.046 qpair failed and we were unable to recover it. 00:27:13.046 [2024-07-27 01:35:04.528327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.528630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.528691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.528895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.529113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.529142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.529342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.529537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.529562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.529725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.529942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.529970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 
00:27:13.047 [2024-07-27 01:35:04.530155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.530348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.530376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.530567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.530736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.530766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.530952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.531148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.531183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.531378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.531513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.531538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.531770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.531926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.531954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.532175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.532367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.532421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.532640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.532778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.532803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 
00:27:13.047 [2024-07-27 01:35:04.533020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.533255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.533284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.533479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.533786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.533837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.534027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.534231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.534260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.534455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.534638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.534699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.534919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.535164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.535193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.535393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.535681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.535731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.535930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.536178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.536207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 
00:27:13.047 [2024-07-27 01:35:04.536424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.536668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.536729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.536930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.537149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.537178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.537367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.537630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.537682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.537872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.538090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.538119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.538343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.538650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.538716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.538910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.539114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.539140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.539346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.539643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.539695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 
00:27:13.047 [2024-07-27 01:35:04.539891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.540110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.540139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.540343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.540568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.540596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.540795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.541013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.541041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.047 qpair failed and we were unable to recover it. 00:27:13.047 [2024-07-27 01:35:04.541262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.047 [2024-07-27 01:35:04.541483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.541511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.541690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.541904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.541932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.542130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.542307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.542333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.542510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.542824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.542884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 
00:27:13.048 [2024-07-27 01:35:04.543076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.543238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.543266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.543456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.543796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.543853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.544021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.544177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.544202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.544415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.544616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.544666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.544889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.545068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.545094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.545247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.545471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.545499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.545719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.545953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.546009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 
00:27:13.048 [2024-07-27 01:35:04.546244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.546421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.546447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.546648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.546962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.547015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.547212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.547448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.547507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.547697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.548021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.548079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.548278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.548495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.548523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.548718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.548908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.548936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.549124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.549367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.549426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 
00:27:13.048 [2024-07-27 01:35:04.549626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.549887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.549937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.550134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.550330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.550362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.550585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.550797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.550823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.550975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.551170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.551196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.551388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.551610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.551660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.551878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.552038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.552072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.552261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.552465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.552522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 
00:27:13.048 [2024-07-27 01:35:04.552746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.552960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.552988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.553214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.553439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.553494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.553690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.553882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.553910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.048 [2024-07-27 01:35:04.554097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.554282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.048 [2024-07-27 01:35:04.554310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.048 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.554457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.554682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.554707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.554882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.555027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.555054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.555283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.555610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.555661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 
00:27:13.049 [2024-07-27 01:35:04.555863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.556055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.556088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.556287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.556440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.556466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.556641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.556969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.557020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.557225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.557403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.557457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.557660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.557839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.557891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.558108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.558275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.558303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.558491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.558774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.558823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 
00:27:13.049 [2024-07-27 01:35:04.559033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.559252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.559278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.559480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.559675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.559716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.559923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.560074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.560116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.560303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.560547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.560590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.560829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.561040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.561069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.561281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.561480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.561507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.561712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.561944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.561971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 
00:27:13.049 [2024-07-27 01:35:04.562146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.562378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.562448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.562662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.562951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.563015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.563233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.563412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.563451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.563671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.563866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.563892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.564085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.564261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.564286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.564483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.564686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.564710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.564935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.565154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.565183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 
00:27:13.049 [2024-07-27 01:35:04.565401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.565590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.565618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.565840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.566034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.566068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.566269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.566574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.566623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.566811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.567000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.567028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.049 qpair failed and we were unable to recover it. 00:27:13.049 [2024-07-27 01:35:04.567223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.049 [2024-07-27 01:35:04.567404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.567429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.567572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.567748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.567773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.567932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.568123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.568152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 
00:27:13.050 [2024-07-27 01:35:04.568352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.568592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.568617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.568784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.568974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.569002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.569207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.569387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.569413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.569606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.569889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.569938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.570136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.570328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.570356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.570503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.570847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.570888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.571098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.571318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.571346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 
00:27:13.050 [2024-07-27 01:35:04.571609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.571806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.571834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.572053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.572282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.572310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.572519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.572812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.572866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.573032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.573253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.573284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.573424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.573640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.573668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.573860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.574065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.574109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.574282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.574607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.574654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 
00:27:13.050 [2024-07-27 01:35:04.574856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.575090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.575119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.575304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.575568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.575620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.575794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.575989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.576017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.576196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.576488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.576513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.576722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.577030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.577086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.577253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.577552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.577607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.050 qpair failed and we were unable to recover it. 00:27:13.050 [2024-07-27 01:35:04.577787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.050 [2024-07-27 01:35:04.578010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.578050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 
00:27:13.051 [2024-07-27 01:35:04.578235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.578409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.578452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.578598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.578812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.578839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.579032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.579225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.579254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.579445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.579597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.579622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.579843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.580033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.580070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.580291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.580597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.580656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.580864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.581085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.581115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 
00:27:13.051 [2024-07-27 01:35:04.581328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.581670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.581726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.581916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.582121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.582162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.582370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.582567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.582595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.582784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.583009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.583035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.583198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.583441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.583493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.583655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.583862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.583892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.584113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.584303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.584331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 
00:27:13.051 [2024-07-27 01:35:04.584527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.584699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.584740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.584922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.585143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.585172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.585340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.585525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.585553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.585745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.585962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.585990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.586198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.586391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.586477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.586694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.586929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.586981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.587176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.587442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.587494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 
00:27:13.051 [2024-07-27 01:35:04.587695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.587947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.588000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.588230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.588449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.588501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.588696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.588885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.588913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.589134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.589336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.589365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.589589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.589914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.589967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.590158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.590381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.051 [2024-07-27 01:35:04.590407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.051 qpair failed and we were unable to recover it. 00:27:13.051 [2024-07-27 01:35:04.590613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.590801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.590826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 
00:27:13.052 [2024-07-27 01:35:04.591003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.591208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.591234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.591429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.591620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.591648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.591861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.592027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.592064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.592263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.592452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.592480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.592633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.592789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.592817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.593031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.593237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.593266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.593444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.593789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.593839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 
00:27:13.052 [2024-07-27 01:35:04.594056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.594250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.594278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.594472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.594733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.594757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.594942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.595144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.595174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.595370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.595613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.595642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.595865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.596076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.596105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.596299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.596538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.596602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.596821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.596966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.597009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 
00:27:13.052 [2024-07-27 01:35:04.597212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.597425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.597477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.597645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.597859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.597887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.598090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.598237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.598279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.598504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.598704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.598729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.598952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.599154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.599180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.599369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.599561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.599586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.599731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.599924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.599952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 
00:27:13.052 [2024-07-27 01:35:04.600150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.600333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.600362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.600584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.600859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.600910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.601115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.601312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.601340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.601555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.601719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.601747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.601927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.602178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.602207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.602427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.602605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.602632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.052 qpair failed and we were unable to recover it. 00:27:13.052 [2024-07-27 01:35:04.602847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.603010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.052 [2024-07-27 01:35:04.603038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 
00:27:13.053 [2024-07-27 01:35:04.603261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.603476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.603504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.603701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.603871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.603897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.604071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.604267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.604293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.604488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.604799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.604857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.605043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.605211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.605238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.605438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.605634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.605663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.605823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.606037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.606072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 
00:27:13.053 [2024-07-27 01:35:04.606278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.606454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.606480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.606702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.606897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.606925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.607107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.607284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.607310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.607516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.607749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.607775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.607963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.608187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.608213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.608403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.608616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.608641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.608809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.608972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.608998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 
00:27:13.053 [2024-07-27 01:35:04.609220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.609497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.609541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.609757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.609943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.609972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.610167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.610400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.610468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.610666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.610974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.611030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.611250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.611467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.611522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.611722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.611890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.611918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.612112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.612299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.612328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 
00:27:13.053 [2024-07-27 01:35:04.612513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.612713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.612740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.612916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.613167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.613196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.613418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.613653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.613678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.613876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.614069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.614094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.614290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.614460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.614490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.614679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.614927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.614977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 00:27:13.053 [2024-07-27 01:35:04.615199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.615391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.053 [2024-07-27 01:35:04.615419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.053 qpair failed and we were unable to recover it. 
00:27:13.053 [2024-07-27 01:35:04.615623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.615771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.615797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.615987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.616210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.616236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.616406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.616697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.616755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.616968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.617140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.617166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.617364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.617659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.617709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.617923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.618116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.618145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.618339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.618504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.618532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 
00:27:13.054 [2024-07-27 01:35:04.618746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.618924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.618952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.619143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.619327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.619356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.619543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.619696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.619722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.619897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.620128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.620157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.620351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.620514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.620543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.620761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.620913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.620939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.621111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.621255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.621281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 
00:27:13.054 [2024-07-27 01:35:04.621476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.621666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.621695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.621866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.622042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.622073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.622238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.622392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.622420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.622604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.622764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.622792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.622993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.623184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.623214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.623444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.623631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.623659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.623879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.624072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.624100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 
00:27:13.054 [2024-07-27 01:35:04.624321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.624508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.624538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.624720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.625040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.625100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.625314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.625515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.625541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.625743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.626025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.626085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.626274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.626500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.626526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.626747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.626935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.626963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.054 [2024-07-27 01:35:04.627159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.627377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.627405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 
00:27:13.054 [2024-07-27 01:35:04.627571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.627792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.054 [2024-07-27 01:35:04.627821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.054 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.628018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.628223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.628250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.628421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.628621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.628649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.628834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.629048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.629083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.629273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.629464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.629490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.629693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.629902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.629956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.630145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.630346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.630372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 
00:27:13.055 [2024-07-27 01:35:04.630531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.630727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.630755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.630979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.631132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.631158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.631326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.631482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.631511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.631699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.631895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.631924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.632100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.632264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.632290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.632475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.632643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.632686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.632879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.633044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.633079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 
00:27:13.055 [2024-07-27 01:35:04.633280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.633502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.633530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.633740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.633964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.633989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.634212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.634430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.634458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.634621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.634838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.634866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.635036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.635235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.635263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.635455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.635646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.635676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.635900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.636106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.636142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 
00:27:13.055 [2024-07-27 01:35:04.636340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.636530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.636559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.636745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.636900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.636928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.637137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.637328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.637356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.055 [2024-07-27 01:35:04.637572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.637773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.055 [2024-07-27 01:35:04.637799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.055 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.637994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.638160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.638189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.638389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.638529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.638555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.638773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.638951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.638980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 
00:27:13.056 [2024-07-27 01:35:04.639205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.639424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.639452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.639686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.639856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.639882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.640123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.640277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.640304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.640528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.640713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.640741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.640972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.641114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.641140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.641309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.641498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.641526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.641720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.641939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.641965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 
00:27:13.056 [2024-07-27 01:35:04.642174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.642393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.642465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.642632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.642822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.642850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.643036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.643204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.643232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.643419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.643609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.643637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.643937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.644125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.644154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.644351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.644573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.644601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.644803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.644998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.645026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 
00:27:13.056 [2024-07-27 01:35:04.645236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.645408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.645451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.645668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.645857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.645885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.646086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.646236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.646276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.646512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.646679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.646704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.646900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.647084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.647113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.647298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.647481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.647509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.647663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.647860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.647886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 
00:27:13.056 [2024-07-27 01:35:04.648067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.648275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.648300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.648449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.648615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.648641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.648873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.649080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.649106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.649277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.649463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.649491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.056 qpair failed and we were unable to recover it. 00:27:13.056 [2024-07-27 01:35:04.649678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.056 [2024-07-27 01:35:04.649866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.649896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.650103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.650322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.650350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.650512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.650673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.650704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 
00:27:13.057 [2024-07-27 01:35:04.650873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.651020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.651045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.651273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.651433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.651461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.651683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.651832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.651859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.652066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.652235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.652260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.652421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.652676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.652727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.652944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.653134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.653170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.653347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.653552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.653578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 
00:27:13.057 [2024-07-27 01:35:04.653794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.653993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.654019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.654223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.654439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.654468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.655748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.655979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.656009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.656209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.656404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.656430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.656654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.656814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.656843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.657015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.657252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.657281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.657506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.657708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.657734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 
00:27:13.057 [2024-07-27 01:35:04.657880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.658112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.658142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.659159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.659360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.659395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.659599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.659795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.659825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.660047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.660238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.660264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.660440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.660660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.660688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.660882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.661096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.661138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.661302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.661517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.661542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 
00:27:13.057 [2024-07-27 01:35:04.661714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.661902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.661931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.662126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.662345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.662374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.662574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.662752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.662778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.662979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.663132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.663161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.057 qpair failed and we were unable to recover it. 00:27:13.057 [2024-07-27 01:35:04.663333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.663506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.057 [2024-07-27 01:35:04.663548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.663741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.663939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.663966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.664149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.664371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.664400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 
00:27:13.058 [2024-07-27 01:35:04.664568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.664794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.664823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.665020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.665185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.665215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.665387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.665565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.665590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.665826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.666003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.666028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.666238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.666435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.666464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.666632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.666778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.666804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.666981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.667198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.667225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 
00:27:13.058 [2024-07-27 01:35:04.667397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.667549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.667577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.667762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.667981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.668009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.668207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.668402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.668431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.668590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.668781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.668807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.668998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.669198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.669225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.669418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.669723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.669774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.669963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.670161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.670187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 
00:27:13.058 [2024-07-27 01:35:04.670354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.670521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.670547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.670715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.670901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.670927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.671129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.671307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.671350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.671579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.671921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.671981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.672187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.672338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.672380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.672583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.672755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.672780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.672994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.673185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.673211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 
00:27:13.058 [2024-07-27 01:35:04.673392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.673588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.673616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.673783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.673964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.673992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.674165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.674337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.674379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.674545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.674754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.674779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.674953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.675135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.675162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.058 qpair failed and we were unable to recover it. 00:27:13.058 [2024-07-27 01:35:04.675315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.058 [2024-07-27 01:35:04.675516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.675544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.675758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.675953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.675979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 
00:27:13.059 [2024-07-27 01:35:04.676148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.676307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.676337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.676539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.676780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.676826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.677033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.677197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.677223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.677383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.677589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.677615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.677757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.677987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.678015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.678213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.678374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.678402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.678579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.678726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.678753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 
00:27:13.059 [2024-07-27 01:35:04.678986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.679186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.679213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.679368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.679550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.679575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.679727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.679910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.679938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.680120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.680267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.680295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.680513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.680655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.680681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.680906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.681092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.681135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.681287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.681500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.681529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 
00:27:13.059 [2024-07-27 01:35:04.681743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.681920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.681949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.682166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.682315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.682356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.682573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.682716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.682742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.682931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.683116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.683146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.683302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.683489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.683517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.683744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.683933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.683959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.684112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.684258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.684284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 
00:27:13.059 [2024-07-27 01:35:04.684459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.684654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.684700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.059 qpair failed and we were unable to recover it. 00:27:13.059 [2024-07-27 01:35:04.684863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.059 [2024-07-27 01:35:04.685071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.685100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.685278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.685483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.685511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.685730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.685940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.685974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.686150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.686298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.686324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.686504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.686698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.686723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.686903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.687102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.687144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 
00:27:13.060 [2024-07-27 01:35:04.687297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.687481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.687514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.687735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.687888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.687916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.688115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.688262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.688288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.688526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.688715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.688744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.688936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.689125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.689152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.689304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.689520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.689548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.689768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.689959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.689988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 
00:27:13.060 [2024-07-27 01:35:04.690166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.690318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.690344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.690573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.690798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.690844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.691045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.691220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.691246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.691395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.691541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.691567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.691789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.692028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.692053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.692215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.692361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.692387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.692596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.692741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.692768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 
00:27:13.060 [2024-07-27 01:35:04.692962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.693165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.693192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.693384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.693622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.693670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.693864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.694069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.694098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.694266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.694443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.694486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.694642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.694858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.694884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.695027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.695185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.695211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.695349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.695521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.695546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 
00:27:13.060 [2024-07-27 01:35:04.695735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.695885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.695910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.060 [2024-07-27 01:35:04.696090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.696238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.060 [2024-07-27 01:35:04.696263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.060 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.696434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.696614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.696647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.696850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.697083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.697128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.697274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.697430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.697475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.697673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.697837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.697866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.698069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.698232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.698257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 
00:27:13.061 [2024-07-27 01:35:04.698422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.698642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.698670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.698844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.699024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.699053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.699238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.699408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.699436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.699655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.699851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.699876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.700070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.700233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.700264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.700482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.700653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.700700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.700925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.701112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.701142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 
00:27:13.061 [2024-07-27 01:35:04.701334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.701506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.701532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.701725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.701918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.701946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.702160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.702328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.702358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.702581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.702754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.702780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.702957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.703168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.703197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.703363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.703526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.703554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.703777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.703972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.704000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 
00:27:13.061 [2024-07-27 01:35:04.704156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.704337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.704363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.704578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.704839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.704883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.705089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.705253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.705282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.705483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.705646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.705694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.705896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.706119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.706147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.706312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.706528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.706556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.706747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.706907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.706932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 
00:27:13.061 [2024-07-27 01:35:04.707106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.707301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.707329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.707541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.707752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.707798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.061 qpair failed and we were unable to recover it. 00:27:13.061 [2024-07-27 01:35:04.707989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.061 [2024-07-27 01:35:04.708157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.708186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.708342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.708526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.708554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.708726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.708907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.708936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.709133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.709332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.709360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.709545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.709727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.709777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 
00:27:13.062 [2024-07-27 01:35:04.710007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.710161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.710187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.710363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.710576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.710609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.710820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.711008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.711036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.711213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.711356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.711381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.711526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.711728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.711757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.711933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.712103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.712145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.712328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.712551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.712597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 
00:27:13.062 [2024-07-27 01:35:04.712769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.712963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.712991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.713178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.713336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.713369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.713590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.713803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.713835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.714078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.714243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.714271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.714464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.714723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.714768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.714964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.715135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.715161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.715316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.715460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.715502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 
00:27:13.062 [2024-07-27 01:35:04.715748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.715913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.715939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.716119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.716288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.716317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.716515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.716685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.716711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.716888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.717075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.717102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.717270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.717491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.717526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.717756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.717976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.718005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.718202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.718355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.718402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 
00:27:13.062 [2024-07-27 01:35:04.718616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.718813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.718859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.719018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.719230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.719257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.062 [2024-07-27 01:35:04.719413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.719645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.062 [2024-07-27 01:35:04.719674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.062 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.719872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.720038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.720072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.720783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.721010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.721040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.721896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.722152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.722183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.722861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.723089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.723120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 
00:27:13.063 [2024-07-27 01:35:04.723293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.723460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.723489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.723689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.723883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.723912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.724103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.724294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.724322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.724496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.724684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.724712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.724905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.725091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.725120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.725299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.725471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.725497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.725708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.725873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.725915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 
00:27:13.063 [2024-07-27 01:35:04.726091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.726252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.726281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.726482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.726652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.726694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.726894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.727071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.727114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.727276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.727515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.727561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.727786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.727970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.727998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.728170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.728352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.728381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.728602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.728795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.728824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 
00:27:13.063 [2024-07-27 01:35:04.729012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.729202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.729231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.729409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.729632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.729661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.729882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.730122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.730148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.730301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.730487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.730516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.730734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.730922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.730950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.731115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.731797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.731830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.732053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.732255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.732284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 
00:27:13.063 [2024-07-27 01:35:04.732473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.732726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.732773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.732948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.733152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.733182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.733348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.733520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.733553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.063 qpair failed and we were unable to recover it. 00:27:13.063 [2024-07-27 01:35:04.733764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.063 [2024-07-27 01:35:04.733964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.733991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.064 qpair failed and we were unable to recover it. 00:27:13.064 [2024-07-27 01:35:04.734199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.734399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.734428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.064 qpair failed and we were unable to recover it. 00:27:13.064 [2024-07-27 01:35:04.734619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.734826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.734872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.064 qpair failed and we were unable to recover it. 00:27:13.064 [2024-07-27 01:35:04.735072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.735294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.735320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.064 qpair failed and we were unable to recover it. 
00:27:13.064 [2024-07-27 01:35:04.735544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.735761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.735787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.064 qpair failed and we were unable to recover it. 00:27:13.064 [2024-07-27 01:35:04.735963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.736120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.736149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.064 qpair failed and we were unable to recover it. 00:27:13.064 [2024-07-27 01:35:04.736346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.736509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.736537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.064 qpair failed and we were unable to recover it. 00:27:13.064 [2024-07-27 01:35:04.736711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.736925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.736958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.064 qpair failed and we were unable to recover it. 00:27:13.064 [2024-07-27 01:35:04.737114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.737282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.737310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.064 qpair failed and we were unable to recover it. 00:27:13.064 [2024-07-27 01:35:04.737531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.737695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.737729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.064 qpair failed and we were unable to recover it. 00:27:13.064 [2024-07-27 01:35:04.737949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.738124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.738168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.064 qpair failed and we were unable to recover it. 
00:27:13.064 [2024-07-27 01:35:04.738363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.738579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.738630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.064 qpair failed and we were unable to recover it. 00:27:13.064 [2024-07-27 01:35:04.738828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.739043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.739079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.064 qpair failed and we were unable to recover it. 00:27:13.064 [2024-07-27 01:35:04.739256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.739470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.739503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.064 qpair failed and we were unable to recover it. 00:27:13.064 [2024-07-27 01:35:04.739735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.739938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.064 [2024-07-27 01:35:04.739964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.740148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.740334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.740364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.740545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.740762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.740795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.741068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.741231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.741258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 
00:27:13.065 [2024-07-27 01:35:04.741496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.741669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.741694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.741890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.742080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.742109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.742301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.742493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.742521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.742694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.742868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.742894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.743069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.743232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.743260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.743474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.743666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.743700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.743897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.744609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.744641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 
00:27:13.065 [2024-07-27 01:35:04.744867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.745087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.745117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.745308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.745522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.745551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.745734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.745892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.745921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.746125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.746319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.746348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.746544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.746772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.746822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.747024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.747198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.747225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.747379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.747596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.747625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 
00:27:13.065 [2024-07-27 01:35:04.747803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.747978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.748021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.748213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.748410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.748438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.065 qpair failed and we were unable to recover it. 00:27:13.065 [2024-07-27 01:35:04.748632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.065 [2024-07-27 01:35:04.748819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.748848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.749069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.749257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.749286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.749458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.749671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.749721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.749944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.750096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.750123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.750321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.750513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.750541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 
00:27:13.066 [2024-07-27 01:35:04.750702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.750865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.750894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.751089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.751313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.751339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.751512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.751697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.751726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.751960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.752137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.752164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.752361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.752541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.752569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.752718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.752886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.752928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.753120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.753297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.753324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 
00:27:13.066 [2024-07-27 01:35:04.753487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.753692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.753740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.753934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.754148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.754178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.754335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.754499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.754527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.754701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.754897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.754925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.755113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.755281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.755310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.755535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.755683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.755708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.755906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.756086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.756115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 
00:27:13.066 [2024-07-27 01:35:04.756290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.756461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.756486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.756680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.756902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.756930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.757169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.757355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.757384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.757587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.757777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.757805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.757998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.758189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.758218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.758390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.758600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.758652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.758885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.759071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.759097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 
00:27:13.066 [2024-07-27 01:35:04.759270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.759473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.759521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.759716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.759896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.759922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.066 [2024-07-27 01:35:04.760075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.760221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.066 [2024-07-27 01:35:04.760247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.066 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.760461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.760643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.760671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.760856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.761071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.761100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.761268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.761476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.761523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.761690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.761877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.761907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 
00:27:13.067 [2024-07-27 01:35:04.762106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.762299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.762327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.762517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.762709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.762737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.762903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.763072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.763117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.763290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.763468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.763497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.763689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.763889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.763915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.764150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.764344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.764370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.764521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.764667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.764692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 
00:27:13.067 [2024-07-27 01:35:04.764911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.765114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.765144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.765307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.765547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.765593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.765765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.765993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.766021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.766208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.766367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.766396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.766558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.766775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.766803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.767033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.767205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.767235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.767401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.767555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.767584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 
00:27:13.067 [2024-07-27 01:35:04.767805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.767999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.768027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.768195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.768368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.768396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.768615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.768841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.768869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.769067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.769250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.769278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.769476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.769668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.769697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.769911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.770047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.770098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.770258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.770439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.770465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 
00:27:13.067 [2024-07-27 01:35:04.770655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.770847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.770873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.771126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.771282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.771308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.771481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.771734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.771782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.067 qpair failed and we were unable to recover it. 00:27:13.067 [2024-07-27 01:35:04.771983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.067 [2024-07-27 01:35:04.772133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.772159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.772330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.772517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.772546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.772701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.772871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.772915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.773072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.773263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.773291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 
00:27:13.068 [2024-07-27 01:35:04.773474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.773676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.773701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.773866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.774084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.774113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.774280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.774459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.774485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.774674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.774828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.774856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.775052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.775254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.775288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.775475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.775640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.775668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.775853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.776006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.776031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 
00:27:13.068 [2024-07-27 01:35:04.776182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.776353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.776382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.776596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.776790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.776819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.777009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.777176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.777205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.777373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.777574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.777618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.777832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.778031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.778056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.778274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.778487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.778513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.778719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.778923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.778949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 
00:27:13.068 [2024-07-27 01:35:04.779142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.779303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.779337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.779511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.779740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.779766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.779992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.780179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.780208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.780409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.780605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.780632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.780777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.780967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.780993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.781182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.781336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.781377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.781596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.781813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.781860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 
00:27:13.068 [2024-07-27 01:35:04.782078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.782240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.782269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.782435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.782605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.782630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.782778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.783008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.783036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.068 [2024-07-27 01:35:04.783239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.783409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.068 [2024-07-27 01:35:04.783459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.068 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.783701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.783871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.783897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.784078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.784221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.784248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.784418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.784604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.784630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 
00:27:13.069 [2024-07-27 01:35:04.784803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.784973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.785000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.785192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.785353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.785381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.785576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.785797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.785825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.786041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.786203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.786232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.786405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.786555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.786582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.786814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.787029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.787063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.787223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.787410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.787439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 
00:27:13.069 [2024-07-27 01:35:04.787660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.787887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.787932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.788121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.788288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.788317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.788497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.788699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.788746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.788920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.789065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.789091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.789232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.789382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.789407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.789574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.789720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.789745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.789886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.790086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.790112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 
00:27:13.069 [2024-07-27 01:35:04.790290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.790492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.790517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.790738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.790947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.790975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-27 01:35:04.791174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.791323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.069 [2024-07-27 01:35:04.791348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.069 qpair failed and we were unable to recover it. 00:27:13.340 [2024-07-27 01:35:04.791568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.340 [2024-07-27 01:35:04.791765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.340 [2024-07-27 01:35:04.791813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.340 qpair failed and we were unable to recover it. 00:27:13.340 [2024-07-27 01:35:04.792013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.340 [2024-07-27 01:35:04.792173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.340 [2024-07-27 01:35:04.792220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.340 qpair failed and we were unable to recover it. 00:27:13.340 [2024-07-27 01:35:04.792394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.340 [2024-07-27 01:35:04.792603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.340 [2024-07-27 01:35:04.792650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.340 qpair failed and we were unable to recover it. 00:27:13.340 [2024-07-27 01:35:04.792811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.340 [2024-07-27 01:35:04.793015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.340 [2024-07-27 01:35:04.793040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.340 qpair failed and we were unable to recover it. 
00:27:13.340 [2024-07-27 01:35:04.793231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.793399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.793427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.793636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.793856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.793884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.794113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.794256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.794282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.794479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.794699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.794732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.794943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.795155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.795184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.795344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.795516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.795558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.795725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.795939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.795972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 
00:27:13.341 [2024-07-27 01:35:04.796132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.796323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.796351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.796508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.796655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.796683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.796882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.797053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.797083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.797273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.797515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.797562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.797781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.797971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.797999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.798207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.798385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.798410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.798586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.798761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.798787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 
00:27:13.341 [2024-07-27 01:35:04.798960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.799160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.799189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.799349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.799555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.799600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.799815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.800029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.800057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.800265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.800463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.800509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.800726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.800938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.800966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.801200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.801376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.801401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.801567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.801768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.801796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 
00:27:13.341 [2024-07-27 01:35:04.801975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.802203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.802230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.802393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.802579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.802607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.802794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.802983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.803013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.803218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.803377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.803409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.803619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.803768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.803793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.803995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.804190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.804218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 00:27:13.341 [2024-07-27 01:35:04.804381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.804587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.341 [2024-07-27 01:35:04.804634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.341 qpair failed and we were unable to recover it. 
00:27:13.341 [2024-07-27 01:35:04.804806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.804970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.804998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.805195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.805366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.805409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.805601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.805789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.805817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.806004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.806219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.806248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.806465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.806659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.806687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.806882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.807087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.807116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.807280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.807486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.807531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 
00:27:13.342 [2024-07-27 01:35:04.807721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.807940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.807968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.808152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.808390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.808438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.808668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.808856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.808889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.809110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.809325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.809353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.809567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.809839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.809884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.810072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.810286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.810314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.810750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.810987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.811015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 
00:27:13.342 [2024-07-27 01:35:04.811234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.811445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.811471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.811676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.811933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.811983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.812202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.812388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.812417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.812591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.812828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.812853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.813079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.813307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.813333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.813590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.813846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.813875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.814103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.814278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.814304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 
00:27:13.342 [2024-07-27 01:35:04.814527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.814706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.814751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.814954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.815128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.815154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.815298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.815461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.815502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.815716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.815920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.815948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.816172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.816360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.816385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.816604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.816812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.816859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 00:27:13.342 [2024-07-27 01:35:04.817067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.817259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.817287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.342 qpair failed and we were unable to recover it. 
00:27:13.342 [2024-07-27 01:35:04.817467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.342 [2024-07-27 01:35:04.817639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.817681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.817899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.818088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.818121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.818320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.818564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.818611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.818796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.818987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.819015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.819218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.819400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.819428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.819649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.819906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.819962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.820169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.820386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.820414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 
00:27:13.343 [2024-07-27 01:35:04.820633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.820853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.820903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.821102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.821307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.821335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.821506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.821678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.821703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.821850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.822019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.822045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.822196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.822368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.822396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.822625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.822844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.822872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.823074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.823250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.823276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 
00:27:13.343 [2024-07-27 01:35:04.823477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.823678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.823704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.823881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.824057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.824095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.824308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.824558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.824604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.824827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.824977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.825002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.825207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.825376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.825405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.825620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.825863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.825915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.826132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.826350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.826376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 
00:27:13.343 [2024-07-27 01:35:04.826574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.826798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.826826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.827018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.827249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.827276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.827479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.827699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.827747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.827980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.828177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.828207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.828382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.828571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.828599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.828799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.828966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.829008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 00:27:13.343 [2024-07-27 01:35:04.829190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.829380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.343 [2024-07-27 01:35:04.829414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.343 qpair failed and we were unable to recover it. 
00:27:13.343 [2024-07-27 01:35:04.829632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.829844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.829872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.830068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.830262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.830290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.830482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.830660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.830709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.830914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.831134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.831163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.831361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.831583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.831612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.831805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.832003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.832028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.832213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.832355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.832382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 
00:27:13.344 [2024-07-27 01:35:04.832592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.832765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.832791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.832984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.833164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.833193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.833368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.833581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.833609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.833802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.833965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.833990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.834213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.834411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.834438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.834608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.834774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.834802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.834991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.835190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.835216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 
00:27:13.344 [2024-07-27 01:35:04.835387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.835592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.835642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.835846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.836018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.836077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.836236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.836461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.836486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.836675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.836861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.836889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.837040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.837241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.837270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.837453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.837646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.837671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.837835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.838034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.838069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 
00:27:13.344 [2024-07-27 01:35:04.838280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.838427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.838453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.838624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.838875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.838920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.839152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.839391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.839438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.839658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.839864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.839914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.840117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.840266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.840292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.840465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.840678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.840711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 00:27:13.344 [2024-07-27 01:35:04.840919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.841119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.841145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.344 qpair failed and we were unable to recover it. 
00:27:13.344 [2024-07-27 01:35:04.841367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.344 [2024-07-27 01:35:04.841532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.841558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.841727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.841886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.841915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.842100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.842315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.842343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.842525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.842729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.842775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.842993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.843188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.843218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.843393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.843557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.843583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.843810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.844002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.844032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 
00:27:13.345 [2024-07-27 01:35:04.844282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.844450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.844476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.844673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.844844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.844870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.845074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.845222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.845247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.845426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.845640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.845668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.845849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.846069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.846098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.846321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.846649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.846701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.847092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.847328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.847369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 
00:27:13.345 [2024-07-27 01:35:04.847595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.847842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.847870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.848066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.848272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.848300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.848490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.848701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.848748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.848979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.849120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.849162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.849384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.849571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.849616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.849807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.850023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.850051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.850275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.850544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.850572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 
00:27:13.345 [2024-07-27 01:35:04.850761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.850937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.850965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.851167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.851322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.851362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.345 qpair failed and we were unable to recover it. 00:27:13.345 [2024-07-27 01:35:04.851570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.851783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.345 [2024-07-27 01:35:04.851829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.852019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.852221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.852250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.852417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.852614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.852639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.852814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.853002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.853031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.853235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.853454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.853488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 
00:27:13.346 [2024-07-27 01:35:04.853721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.853916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.853944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.854145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.854339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.854368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.854563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.854850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.854913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.855128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.855291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.855320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.855514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.855702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.855731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.855926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.856171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.856196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.856397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.856589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.856614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 
00:27:13.346 [2024-07-27 01:35:04.856808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.857001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.857029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.857257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.857427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.857455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.857640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.857851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.857903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.858128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.858320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.858348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.858537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.858916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.858968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.859190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.859563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.859617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.859815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.859994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.860036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 
00:27:13.346 [2024-07-27 01:35:04.860251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.860461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.860494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.860696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.860967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.861018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.861246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.861516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.861568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.861758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.861960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.861988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.862217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.862374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.862407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.862641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.862807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.862838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.863034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.863205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.863234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 
00:27:13.346 [2024-07-27 01:35:04.863408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.863658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.863697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.863893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.864077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.864106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.864322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.864544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.346 [2024-07-27 01:35:04.864573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.346 qpair failed and we were unable to recover it. 00:27:13.346 [2024-07-27 01:35:04.864788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.864994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.865020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.865228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.865511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.865540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.865727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.865916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.865945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.866154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.866322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.866363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 
00:27:13.347 [2024-07-27 01:35:04.866589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.866780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.866825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.867023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.867292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.867321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.867515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.867761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.867794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.868009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.868175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.868205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.868400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.868581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.868622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.868878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.869080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.869109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.869301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.869485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.869513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 
00:27:13.347 [2024-07-27 01:35:04.869666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.869854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.869882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.870043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.870210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.870238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.870472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.870644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.870670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.870864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.871029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.871057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.871291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.871540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.871565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.871810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.872128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.872164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.872379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.872606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.872644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 
00:27:13.347 [2024-07-27 01:35:04.872852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.873044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.873085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.873292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.873544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.873584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.873806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.874002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.874032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.874266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.874444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.874470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.874627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.874823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.874852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.875047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.875238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.875264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.875433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.875619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.875647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 
00:27:13.347 [2024-07-27 01:35:04.875814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.876035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.876090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.876291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.876473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.876502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.876703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.876898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.876939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.347 qpair failed and we were unable to recover it. 00:27:13.347 [2024-07-27 01:35:04.877140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.347 [2024-07-27 01:35:04.877281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.877307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.877457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.877694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.877724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.877933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.878111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.878148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.878334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.878575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.878604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 
00:27:13.348 [2024-07-27 01:35:04.878827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.879048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.879085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.879257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.879440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.879467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.879626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.879828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.879868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.880079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.880255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.880283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.880508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.880686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.880712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.880879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.881052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.881088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.881234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.881391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.881418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 
00:27:13.348 [2024-07-27 01:35:04.881613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.881814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.881842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.882040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.882234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.882261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.882444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.882611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.882638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.882892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.883092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.883147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.883318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.883462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.883488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.883662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.883837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.883865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.884041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.884201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.884229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 
00:27:13.348 [2024-07-27 01:35:04.884406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.884599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.884628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.884781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.885011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.885042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.885229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.885408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.885434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.885575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.885763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.885792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.885987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.886161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.886188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.886401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.886598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.886629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.886834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.887008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.887036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 
00:27:13.348 [2024-07-27 01:35:04.887257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.887432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.887463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.887661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.887854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.887880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.888030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.888302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.888330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.348 qpair failed and we were unable to recover it. 00:27:13.348 [2024-07-27 01:35:04.888564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.888761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.348 [2024-07-27 01:35:04.888793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.888994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.889202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.889229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.889403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.889575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.889608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.889844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.890038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.890078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 
00:27:13.349 [2024-07-27 01:35:04.890273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.890468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.890496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.890659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.890812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.890841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.891046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.891227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.891259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.891482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.891729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.891781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.891971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.892165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.892196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.892406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.892556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.892583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.892822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.893016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.893049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 
00:27:13.349 [2024-07-27 01:35:04.893292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.893476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.893501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.893679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.893886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.893922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.894146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.894317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.894346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.894544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.894736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.894764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.894967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.895115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.895142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.895316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.895504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.895534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.895760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.895960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.895990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 
00:27:13.349 [2024-07-27 01:35:04.896175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.896352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.896378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.896607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.896801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.896828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.897023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.897245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.897280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.897485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.897685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.897732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.897926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.898148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.898179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.898385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.898565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.898597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.898848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.899015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.899043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 
00:27:13.349 [2024-07-27 01:35:04.899257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.899410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.899454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.899692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.899872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.899900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.900101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.900274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.900304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.349 [2024-07-27 01:35:04.900503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.900770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.349 [2024-07-27 01:35:04.900819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.349 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.901011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.901164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.901202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.901433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.901664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.901727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.901934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.902161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.902188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 
00:27:13.350 [2024-07-27 01:35:04.902340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.902515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.902547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.902714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.902886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.902928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.903129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.903330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.903381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.903566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.903744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.903796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.904005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.904161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.904188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.904394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.904679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.904711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.904900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.905087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.905117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 
00:27:13.350 [2024-07-27 01:35:04.905289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.905508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.905539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.905732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.905895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.905935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.906121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.906303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.906350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.906550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.906738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.906779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.906988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.907158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.907185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.907365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.907540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.907566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.907740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.907936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.907964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 
00:27:13.350 [2024-07-27 01:35:04.908202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.908380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.908415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.908620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.908784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.908814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.909041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.909201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.909232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.909432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.909654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.909703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.909928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.910123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.910163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.910383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.910592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.910631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 00:27:13.350 [2024-07-27 01:35:04.910861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.911052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.350 [2024-07-27 01:35:04.911089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.350 qpair failed and we were unable to recover it. 
00:27:13.351 [2024-07-27 01:35:04.911260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.911498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.911551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.911777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.911941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.911972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.912196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.912415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.912444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.912607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.912807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.912838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.913023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.913208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.913235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.913431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.913652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.913684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.913938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.914162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.914192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 
00:27:13.351 [2024-07-27 01:35:04.914411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.914704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.914753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.914960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.915118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.915143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.915285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.915491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.915530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.915846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.916036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.916073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.916314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.916572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.916625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.916823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.917048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.917084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.917258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.917480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.917511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 
00:27:13.351 [2024-07-27 01:35:04.917697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.917952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.918003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.918209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.918403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.918457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.918622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.918818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.918876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.919122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.919324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.919350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.919536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.919738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.919766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.919966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.920161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.920192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.920383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.920605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.920638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 
00:27:13.351 [2024-07-27 01:35:04.920837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.921032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.921081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.921301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.921495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.921523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.921722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.921941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.921973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.922198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.922418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.922464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.922665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.922885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.922914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.923113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.923270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.923295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 00:27:13.351 [2024-07-27 01:35:04.923495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.923688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.351 [2024-07-27 01:35:04.923717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.351 qpair failed and we were unable to recover it. 
00:27:13.352 [2024-07-27 01:35:04.923943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.924114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.924143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.924337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.924576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.924624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.924881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.925119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.925149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.925332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.925520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.925568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.925764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.925941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.925968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.926192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.926377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.926406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.926578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.926753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.926796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 
00:27:13.352 [2024-07-27 01:35:04.926995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.927187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.927217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.927420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.927615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.927663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.927857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.928044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.928086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.928289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.928487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.928514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.928666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.928816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.928842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.929071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.929289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.929315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.929491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.929685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.929711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 
00:27:13.352 [2024-07-27 01:35:04.929894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.930093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.930122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.930291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.930466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.930508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.930730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.930955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.930984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.931179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.931374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.931400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.931606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.931831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.931857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.932008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.932179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.932223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.932421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.932631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.932681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 
00:27:13.352 [2024-07-27 01:35:04.932902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.933090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.933120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.933334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.933494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.933524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.933691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.933890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.933939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.934132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.934307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.934334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.934533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.934749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.934797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.934982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.935174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.935201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 00:27:13.352 [2024-07-27 01:35:04.935354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.935526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.352 [2024-07-27 01:35:04.935569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.352 qpair failed and we were unable to recover it. 
00:27:13.352 [2024-07-27 01:35:04.935745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.935894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.935920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.936115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.936345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.936375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.936671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.936912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.936941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.937123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.937313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.937342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.937541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.937760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.937789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.937981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.938181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.938207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.938386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.938616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.938645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 
00:27:13.353 [2024-07-27 01:35:04.938841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.939072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.939102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.939273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.939421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.939449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.939648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.939813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.939844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.940033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.940245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.940272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.940417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.940632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.940661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.940863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.941091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.941118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.941289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.941482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.941511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 
00:27:13.353 [2024-07-27 01:35:04.941793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.942014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.942041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.942273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.942479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.942508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.942710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.942880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.942905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.943140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.943313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.943340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.943529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.943685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.943713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.943928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.944125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.944156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.944330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.944503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.944545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 
00:27:13.353 [2024-07-27 01:35:04.944762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.944954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.944982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.945202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.945348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.945375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.945606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.945822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.945848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.946020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.946222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.946248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.946456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.946631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.946657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.946890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.947083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.947112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 00:27:13.353 [2024-07-27 01:35:04.947311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.947529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.947576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.353 qpair failed and we were unable to recover it. 
00:27:13.353 [2024-07-27 01:35:04.947743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.353 [2024-07-27 01:35:04.947918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.947945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.948114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.948333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.948361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.948616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.948885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.948929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.949153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.949350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.949377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.949571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.949777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.949824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.950010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.950201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.950230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.950392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.950560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.950586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 
00:27:13.354 [2024-07-27 01:35:04.950737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.950943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.950972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.951174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.951322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.951365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.951532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.951730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.951755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.951939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.952126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.952155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.952316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.952537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.952563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.952741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.952971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.953000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.953226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.953419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.953449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 
00:27:13.354 [2024-07-27 01:35:04.953696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.953918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.953944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.954163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.954354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.954380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.954583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.954850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.954899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.955118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.955291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.955319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.955594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.955871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.955897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.956069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.956267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.956292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.956466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.956615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.956658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 
00:27:13.354 [2024-07-27 01:35:04.956889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.957111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.957140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.957314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.957594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.957643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.957847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.957996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.958022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.354 qpair failed and we were unable to recover it. 00:27:13.354 [2024-07-27 01:35:04.958230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.354 [2024-07-27 01:35:04.958451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.958502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.958700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.958892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.958921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.959153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.959325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.959353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.959575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.959783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.959832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 
00:27:13.355 [2024-07-27 01:35:04.960050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.960262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.960291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.960470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.960642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.960682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.960856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.961043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.961078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.961281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.961470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.961504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.961723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.961926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.961952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.962142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.962300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.962330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.962573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.962830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.962885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 
00:27:13.355 [2024-07-27 01:35:04.963080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.963237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.963268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.963461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.963632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.963685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.963883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.964120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.964146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.964301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.964486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.964512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.964705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.964893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.964922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.965140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.965328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.965356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.965526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.965720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.965749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 
00:27:13.355 [2024-07-27 01:35:04.965947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.966101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.966144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.966365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.966586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.966612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.966790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.966936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.966967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.967163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.967363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.967389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.967610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.967810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.967837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.967995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.968206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.968235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.968423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.968633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.968685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 
00:27:13.355 [2024-07-27 01:35:04.968883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.969092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.969134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.969359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.969628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.969654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.969887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.970077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.970107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.355 qpair failed and we were unable to recover it. 00:27:13.355 [2024-07-27 01:35:04.970280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.355 [2024-07-27 01:35:04.970583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.970632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.970804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.970994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.971023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.971204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.971369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.971402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.971620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.971836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.971862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 
00:27:13.356 [2024-07-27 01:35:04.972052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.972261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.972287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.972482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.972721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.972767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.972988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.973174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.973203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.973398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.973642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.973692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.973889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.974122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.974151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.974347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.974744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.974793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.974989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.975158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.975187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 
00:27:13.356 [2024-07-27 01:35:04.975406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.975626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.975674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.975865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.976089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.976119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.976343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.976635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.976684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.976963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.977181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.977212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.977410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.977629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.977675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.977834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.978088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.978131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.978309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.978567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.978607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 
00:27:13.356 [2024-07-27 01:35:04.978834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.979052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.979089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.979297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.979570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.979619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.979782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.980004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.980033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.980234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.980399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.980428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.980623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.980836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.980884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.981114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.981305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.981333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.981522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.981738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.981766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 
00:27:13.356 [2024-07-27 01:35:04.981956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.982144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.982173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.982338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.982547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.982572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.982869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.983064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.983093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.356 qpair failed and we were unable to recover it. 00:27:13.356 [2024-07-27 01:35:04.983288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.983459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.356 [2024-07-27 01:35:04.983486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.983689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.983901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.983940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.984143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.984333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.984363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.984559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.984746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.984774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 
00:27:13.357 [2024-07-27 01:35:04.984981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.985181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.985210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.985406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.985672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.985719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.985933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.986153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.986198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.986420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.986602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.986631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.986803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.987021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.987049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.987240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.987415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.987442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.987590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.987738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.987764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 
00:27:13.357 [2024-07-27 01:35:04.987938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.988136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.988166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.988359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.988607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.988657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.988854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.989000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.989026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.989225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.989421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.989468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.989692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.989920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.989949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.990149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.990369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.990398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.990593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.990808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.990855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 
00:27:13.357 [2024-07-27 01:35:04.991076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.991263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.991293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.991515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.991724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.991773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.991993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.992199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.992229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.992447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.992697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.992747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.992941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.993140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.993171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.993368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.993610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.993659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.993892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.994093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.994123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 
00:27:13.357 [2024-07-27 01:35:04.994328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.994504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.994530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.994708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.994855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.994896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.995070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.995266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.995293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.357 [2024-07-27 01:35:04.995472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.995643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.357 [2024-07-27 01:35:04.995669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.357 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-27 01:35:04.995868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-27 01:35:04.996079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-27 01:35:04.996122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-27 01:35:04.996276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-27 01:35:04.996451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-27 01:35:04.996493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 00:27:13.358 [2024-07-27 01:35:04.996716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-27 01:35:04.996891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-27 01:35:04.996917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it. 
00:27:13.358 [2024-07-27 01:35:04.997088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-27 01:35:04.997261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.358 [2024-07-27 01:35:04.997288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.358 qpair failed and we were unable to recover it.
00:27:13.358 [... the same error sequence (posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.") repeats continuously from [2024-07-27 01:35:04.997483] through [2024-07-27 01:35:05.062345], across console timestamps 00:27:13.358-00:27:13.363; only the per-message timestamps differ ...]
00:27:13.363 [2024-07-27 01:35:05.062580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.062733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.062759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.363 qpair failed and we were unable to recover it. 00:27:13.363 [2024-07-27 01:35:05.062929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.063126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.063155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.363 qpair failed and we were unable to recover it. 00:27:13.363 [2024-07-27 01:35:05.063329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.063505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.063531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.363 qpair failed and we were unable to recover it. 00:27:13.363 [2024-07-27 01:35:05.063710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.063938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.063967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.363 qpair failed and we were unable to recover it. 00:27:13.363 [2024-07-27 01:35:05.064167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.064337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.064363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.363 qpair failed and we were unable to recover it. 00:27:13.363 [2024-07-27 01:35:05.064505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.064672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.064697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.363 qpair failed and we were unable to recover it. 00:27:13.363 [2024-07-27 01:35:05.064876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.065031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.065067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.363 qpair failed and we were unable to recover it. 
00:27:13.363 [2024-07-27 01:35:05.065234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.065451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.363 [2024-07-27 01:35:05.065477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.065651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.065846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.065874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.066071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.066226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.066256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.066481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.066682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.066726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.066951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.067118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.067148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.067322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.067466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.067507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.067708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.067903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.067929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 
00:27:13.364 [2024-07-27 01:35:05.068134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.068303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.068330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.068540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.068759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.068785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.068982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.069128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.069155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.069326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.069490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.069521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.069737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.069888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.069914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.070116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.070314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.070342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.070520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.070721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.070747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 
00:27:13.364 [2024-07-27 01:35:05.070967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.071159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.071190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.071412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.071662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.071713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.071932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.072172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.072203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.072424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.072626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.072653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.072832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.073023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.073052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.073281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.073487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.073533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.073723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.073891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.073921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 
00:27:13.364 [2024-07-27 01:35:05.074131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.074327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.074356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.074550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.074764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.074793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.074985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.075155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.075187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.364 qpair failed and we were unable to recover it. 00:27:13.364 [2024-07-27 01:35:05.075376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.364 [2024-07-27 01:35:05.075590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.075619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.075807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.075978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.076005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.076206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.076354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.076381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.076554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.076721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.076746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 
00:27:13.365 [2024-07-27 01:35:05.076948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.077169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.077196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.077368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.077514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.077541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.077778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.077968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.077997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.078200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.078392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.078421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.078605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.078784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.078814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.079014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.079167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.079194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.079423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.079621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.079648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 
00:27:13.365 [2024-07-27 01:35:05.079849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.080018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.080046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.080249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.080440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.080469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.080690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.080859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.080885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.081087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.081284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.081312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.081512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.081686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.081713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.081895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.082091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.082119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.082320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.082465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.082492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 
00:27:13.365 [2024-07-27 01:35:05.082714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.082903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.082931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.083130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.083321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.083348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.083520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.083671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.083696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.083866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.084082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.084126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.084308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.084575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.084625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.084841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.085034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.365 [2024-07-27 01:35:05.085070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.365 qpair failed and we were unable to recover it. 00:27:13.365 [2024-07-27 01:35:05.085290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.085483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.085510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 
00:27:13.638 [2024-07-27 01:35:05.085689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.085840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.085866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-07-27 01:35:05.086055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.086251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.086280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-07-27 01:35:05.086456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.086633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.086660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-07-27 01:35:05.086839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.087009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.087035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-07-27 01:35:05.087175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9504b0 is same with the state(5) to be set 00:27:13.638 [2024-07-27 01:35:05.087400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.087593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.087625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-07-27 01:35:05.087822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.088035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.088072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-07-27 01:35:05.088267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.088403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.088445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 
00:27:13.638 [2024-07-27 01:35:05.088773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.089070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.089098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-07-27 01:35:05.089290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.089489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.089519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-07-27 01:35:05.089737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.090050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.090112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-07-27 01:35:05.090310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.090524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.090553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-07-27 01:35:05.090834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.091057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.091090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.638 qpair failed and we were unable to recover it. 00:27:13.638 [2024-07-27 01:35:05.091241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.091480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.638 [2024-07-27 01:35:05.091505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.091889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.092108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.092136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 
00:27:13.639 [2024-07-27 01:35:05.092342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.092544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.092573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.092763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.092955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.092983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.093178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.093327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.093353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.093547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.093713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.093742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.093925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.094148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.094175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.094353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.094531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.094560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.094744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.094934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.094962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 
00:27:13.639 [2024-07-27 01:35:05.095163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.095339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.095365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.095558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.095717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.095745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.095941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.096140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.096166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.096358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.096546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.096592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.096892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.097129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.097155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.097326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.097511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.097540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.097748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.097939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.097967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 
00:27:13.639 [2024-07-27 01:35:05.098164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.098353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.098420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.098722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.098888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.098916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.099114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.099265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.099291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.099444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.099649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.099679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.099897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.100126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.100153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.100427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.100826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.100885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.101116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.101263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.101293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 
00:27:13.639 [2024-07-27 01:35:05.101469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.101659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.101687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.101868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.102081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.102124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.102276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.102446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.102486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.102789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.103034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.103069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.103266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.103497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.103549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.639 [2024-07-27 01:35:05.103782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.104013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.639 [2024-07-27 01:35:05.104040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.639 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.104227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.104367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.104408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-27 01:35:05.104624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.104806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.104834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.105050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.105223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.105249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.105400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.105626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.105654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.105893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.106054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.106089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.106283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.106499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.106544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.106835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.107022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.107052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.107261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.107506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.107566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-27 01:35:05.107765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.107957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.107985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.108176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.108326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.108352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.108583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.108897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.108956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.109149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.109437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.109491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.109663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.109866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.109906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.110086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.110322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.110351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.110571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.110905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.110959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-27 01:35:05.111160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.111324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.111352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.111542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.111774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.111828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.112024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.112249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.112278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.112481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.112678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.112703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.112908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.113110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.113136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.113305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.113617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.113645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.113852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.114034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.114068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-27 01:35:05.114251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.114438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.114467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.114658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.114870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.114916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.115126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.115414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.115443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.115604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.115847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.115872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.116047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.116244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.116272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.116495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.116716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.640 [2024-07-27 01:35:05.116768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.640 qpair failed and we were unable to recover it. 00:27:13.640 [2024-07-27 01:35:05.116961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.117152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.117182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 
00:27:13.641 [2024-07-27 01:35:05.117410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.117657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.117685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.117928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.118130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.118159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.118347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.118551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.118577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.118754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.118917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.118943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.119148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.119294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.119320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.119579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.119772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.119805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.120074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.120260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.120288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 
00:27:13.641 [2024-07-27 01:35:05.120480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.120668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.120696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.120896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.121127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.121153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.121350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.121566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.121594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.121791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.121987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.122015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.122232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.122422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.122451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.122639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.122872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.122898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.123101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.123277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.123305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 
00:27:13.641 [2024-07-27 01:35:05.123521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.123709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.123738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.123963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.124155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.124183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.124386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.124556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.124584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.124779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.124990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.125018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.125218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.125362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.125387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.125557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.125914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.125970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.126190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.126375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.126404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 
00:27:13.641 [2024-07-27 01:35:05.126622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.126867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.126892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.127037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.127194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.127220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.127371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.127548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.127591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.127813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.128072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.128101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.128302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.128498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.128524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.641 [2024-07-27 01:35:05.128709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.128893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.641 [2024-07-27 01:35:05.128921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.641 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.129095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.129244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.129286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 
00:27:13.642 [2024-07-27 01:35:05.129517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.129702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.129751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.129937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.130132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.130162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.130381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.130552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.130577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.130722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.130939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.130967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.131195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.131394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.131423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.131591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.131804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.131833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.132093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.132342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.132367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 
00:27:13.642 [2024-07-27 01:35:05.132626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.132828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.132856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.133043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.133312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.133340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.133558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.133764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.133793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.133969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.134201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.134230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.134427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.134618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.134646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.134846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.135046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.135079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.135272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.135477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.135503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 
00:27:13.642 [2024-07-27 01:35:05.135673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.135886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.135914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.136116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.136280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.136306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.136529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.136745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.136773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.136987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.137181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.137210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.137436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.137612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.137663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.137879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.138105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.138134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.138333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.138585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.138638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 
00:27:13.642 [2024-07-27 01:35:05.138840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.139005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.139031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.139242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.139431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.139460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.139646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.139847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.139872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.140048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.140245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.140272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.140461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.140661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.140687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.140865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.141053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.141087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.642 qpair failed and we were unable to recover it. 00:27:13.642 [2024-07-27 01:35:05.141292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.642 [2024-07-27 01:35:05.141465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.141491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 
00:27:13.643 [2024-07-27 01:35:05.141687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.141842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.141875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.142098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.142314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.142343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.142543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.142687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.142713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.142855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.143052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.143086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.143286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.143504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.143533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.143730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.143920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.143948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.144149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.144436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.144481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 
00:27:13.643 [2024-07-27 01:35:05.144703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.144879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.144904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.145080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.145255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.145281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.145467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.145757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.145807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.146004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.146152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.146178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.146323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.146525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.146552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.146733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.146876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.146901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.147142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.147281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.147307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 
00:27:13.643 [2024-07-27 01:35:05.147481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.147661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.147686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.147883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.148101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.148130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.148314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.148476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.148504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.148673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.148842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.148885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.149085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.149229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.149255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.149420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.149650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.149698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.149928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.150077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.150104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 
00:27:13.643 [2024-07-27 01:35:05.150259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.150566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.150628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.643 qpair failed and we were unable to recover it. 00:27:13.643 [2024-07-27 01:35:05.150855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.151009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.643 [2024-07-27 01:35:05.151037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.151257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.151458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.151486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.151702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.151918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.151946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.152140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.152311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.152336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.152483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.152667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.152695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.152915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.153139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.153168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 
00:27:13.644 [2024-07-27 01:35:05.153358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.153551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.153579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.153770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.153921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.153947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.154123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.154347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.154376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.154591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.154864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.154920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.155128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.155280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.155306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.155520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.155691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.155717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.155870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.156057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.156091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 
00:27:13.644 [2024-07-27 01:35:05.156289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.156459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.156485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.156669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.156835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.156863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.157127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.157328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.157372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.157577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.157771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.157799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.158016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.158214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.158242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.158403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.158590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.158618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.158823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.159037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.159074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 
00:27:13.644 [2024-07-27 01:35:05.159275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.159418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.159444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.159643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.159856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.159884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.160079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.160255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.160280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.160497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.160691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.160720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.160919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.161098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.161141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.161314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.161526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.161555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.161724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.161923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.161949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 
00:27:13.644 [2024-07-27 01:35:05.162094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.162241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.162267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.162467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.162634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.644 [2024-07-27 01:35:05.162660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.644 qpair failed and we were unable to recover it. 00:27:13.644 [2024-07-27 01:35:05.162854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.163016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.163051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.163283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.163505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.163557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.163781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.163928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.163954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.164155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.164362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.164390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.164611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.164777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.164807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 
00:27:13.645 [2024-07-27 01:35:05.165022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.165204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.165230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.165409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.165596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.165624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.165798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.165971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.165997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.166173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.166464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.166523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.166740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.166922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.166950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.167141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.167357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.167385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.167614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.167784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.167812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 
00:27:13.645 [2024-07-27 01:35:05.167997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.168222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.168250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.168416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.168591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.168616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.168790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.168965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.168991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.169166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.169380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.169409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.169594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.169788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.169816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.170007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.170170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.170199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.170392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.170569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.170595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 
00:27:13.645 [2024-07-27 01:35:05.170811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.171007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.171032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.171242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.171406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.171434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.171631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.171821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.171849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.172005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.172188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.172217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.172385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.172618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.172657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.172869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.173067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.173096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.173302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.173475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.173518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 
00:27:13.645 [2024-07-27 01:35:05.173685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.173833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.173859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.174029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.174293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.174322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.645 qpair failed and we were unable to recover it. 00:27:13.645 [2024-07-27 01:35:05.174518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.645 [2024-07-27 01:35:05.174693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.174720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.174922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.175129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.175156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.175301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.175510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.175538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.175721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.175943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.175969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.176119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.176288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.176329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 
00:27:13.646 [2024-07-27 01:35:05.176546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.176863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.176919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.177110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.177328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.177357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.177523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.177713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.177741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.177939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.178120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.178146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.178339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.178488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.178516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.178722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.178969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.178994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.179234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.179427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.179456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 
00:27:13.646 [2024-07-27 01:35:05.179673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.179888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.179917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.180095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.180243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.180272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.180460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.180616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.180644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.180820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.180994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.181036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.181240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.181388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.181414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.181618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.182434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.182468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.182692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.182877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.182906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 
00:27:13.646 [2024-07-27 01:35:05.183108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.183308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.183334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.183529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.183687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.183716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.183981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.184242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.184271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.184471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.184624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.184650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.184825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.185010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.185038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.185243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.185427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.185455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.185653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.185830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.185855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 
00:27:13.646 [2024-07-27 01:35:05.186033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.186239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.186267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.186452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.186651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.186676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.186849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.187014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.646 [2024-07-27 01:35:05.187043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.646 qpair failed and we were unable to recover it. 00:27:13.646 [2024-07-27 01:35:05.187251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.187470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.187499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.187687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.187871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.187899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.188077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.188249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.188292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.188482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.188696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.188722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 
00:27:13.647 [2024-07-27 01:35:05.188917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.189090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.189117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.189323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.189498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.189524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.189730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.189947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.189975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.190135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.190324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.190353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.190554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.190724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.190750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.190977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.191131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.191160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.191348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.191568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.191596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 
00:27:13.647 [2024-07-27 01:35:05.191789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.191978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.192006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.192197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.192379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.192408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.192595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.192768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.192814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.193039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.193238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.193267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.193490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.193684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.193710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.193916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.194051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.194084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.194221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.194443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.194489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 
00:27:13.647 [2024-07-27 01:35:05.194711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.194885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.194926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.195127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.195330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.195373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.195571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.195767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.195793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.196030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.196215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.196241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.196447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.196683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.196708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.196888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.197080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.197109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.647 qpair failed and we were unable to recover it. 00:27:13.647 [2024-07-27 01:35:05.197300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.647 [2024-07-27 01:35:05.197517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.197543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 
00:27:13.648 [2024-07-27 01:35:05.197773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.197963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.197992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.198193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.198343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.198372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.198536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.198719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.198747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.198929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.199128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.199157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.199341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.199482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.199524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.199727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.199921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.199950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.200165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.200357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.200388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 
00:27:13.648 [2024-07-27 01:35:05.200591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.200841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.200883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.201082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.201238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.201266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.201463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.201773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.201831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.202095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.202318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.202363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.202554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.202875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.202935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.203169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.203332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.203362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.203588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.203741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.203768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 
00:27:13.648 [2024-07-27 01:35:05.203963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.204157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.204187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.204395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.204537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.204563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.204735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.204913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.204939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.205149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.205324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.205350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.205576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.205863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.205925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.206178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.206356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.206382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.206588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.206793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.206838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 
00:27:13.648 [2024-07-27 01:35:05.207035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.207248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.207277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.207486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.207770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.207840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.208119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.208336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.208372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.208593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.208944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.208995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.209203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.209459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.209515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.209731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.210085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.648 [2024-07-27 01:35:05.210156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.648 qpair failed and we were unable to recover it. 00:27:13.648 [2024-07-27 01:35:05.210348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.210569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.210594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 
00:27:13.649 [2024-07-27 01:35:05.210805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.210952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.210978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.211131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.211313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.211352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.211566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.211770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.211795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.211985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.212231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.212261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.212466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.212655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.212720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.212915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.213147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.213173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.213349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.213618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.213651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 
00:27:13.649 [2024-07-27 01:35:05.213855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.214083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.214125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.214283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.214500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.214528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.214756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.214980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.215020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.215242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.215434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.215466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.215679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.215867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.215895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.216056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.216267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.216297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.216507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.216699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.216729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 
00:27:13.649 [2024-07-27 01:35:05.216917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.217086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.217115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.217386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.217551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.217579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.217774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.217997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.218025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.218285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.218520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.218548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.218736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.218960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.218988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.219186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.219362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.219401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.219596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.219783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.219811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 
00:27:13.649 [2024-07-27 01:35:05.220016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.220246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.220275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.220470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.220685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.220713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.220979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.221209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.221238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.221428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.221732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.221791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.221989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.222163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.222189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.222388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.222564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.222604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.649 qpair failed and we were unable to recover it. 00:27:13.649 [2024-07-27 01:35:05.222786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.223154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.649 [2024-07-27 01:35:05.223179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 
00:27:13.650 [2024-07-27 01:35:05.223388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.223635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.223681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.223896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.224101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.224130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.224329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.224518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.224546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.224760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.224921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.224950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.225142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.225333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.225384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.225552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.225748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.225839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.226072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.226257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.226285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 
00:27:13.650 [2024-07-27 01:35:05.226504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.226709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.226755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.226925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.227128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.227158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.227367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.227600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.227626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.227846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.228036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.228077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.228262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.228416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.228453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.228658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.228848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.228878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.229149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.229352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.229393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 
00:27:13.650 [2024-07-27 01:35:05.229597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.229762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.229791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.229981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.230163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.230191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.230388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.230607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.230637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.230861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.231013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.231055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.231276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.231520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.231565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.231789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.231985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.232012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.232186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.232378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.232407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 
00:27:13.650 [2024-07-27 01:35:05.232612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.232830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.232870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.233070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.233270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.233298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.233501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.233757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.233802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.233970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.234181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.234208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.234413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.234690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.234718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.234895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.235090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.235126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 00:27:13.650 [2024-07-27 01:35:05.235325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.235534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.650 [2024-07-27 01:35:05.235560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.650 qpair failed and we were unable to recover it. 
00:27:13.650 [2024-07-27 01:35:05.235736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.235951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.235980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.236198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.236397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.236425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.236641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.236785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.236810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.237051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.237273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.237301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.237522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.237749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.237799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.238089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.238286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.238311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.238545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.238763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.238789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 
00:27:13.651 [2024-07-27 01:35:05.238986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.239176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.239202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.239390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.239652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.239696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.239887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.240105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.240135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.240324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.240465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.240491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.240651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.240830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.240861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.241086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.241241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.241267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.241469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.241673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.241718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 
00:27:13.651 [2024-07-27 01:35:05.241935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.242111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.242142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.242370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.242597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.242626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.242897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.243095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.243125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.243325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.243534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.243560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.243924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.244135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.244171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.244393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.244694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.244751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.244972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.245127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.245156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 
00:27:13.651 [2024-07-27 01:35:05.245345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.245619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.245647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.245912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.246111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.246140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.246306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.246562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.246617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.246814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.247027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.247056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.247287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.247480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.247509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.247702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.247921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.247966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 00:27:13.651 [2024-07-27 01:35:05.248194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.248409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.651 [2024-07-27 01:35:05.248438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.651 qpair failed and we were unable to recover it. 
00:27:13.651 [2024-07-27 01:35:05.248635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.248956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.248985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.249191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.249487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.249541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.249725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.249941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.249970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.250145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.250316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.250342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.250506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.250709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.250738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.250928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.251128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.251158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.251379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.251599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.251653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 
00:27:13.652 [2024-07-27 01:35:05.251843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.252068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.252109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.252264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.252456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.252488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.252725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.252990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.253043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.253253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.253507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.253560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.253749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.253963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.253992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.254180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.254354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.254379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.254625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.254819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.254848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 
00:27:13.652 [2024-07-27 01:35:05.255038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.255238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.255266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.255484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.255639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.255665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.255860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.256042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.256080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.256308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.256592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.256647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.256839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.257044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.257082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.257312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.257612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.257664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.257877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.258107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.258134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 
00:27:13.652 [2024-07-27 01:35:05.258317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.258490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.258515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.258727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.258928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.258956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.259163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.259310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.259335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.652 qpair failed and we were unable to recover it. 00:27:13.652 [2024-07-27 01:35:05.259488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.652 [2024-07-27 01:35:05.259791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.259846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.260018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.260222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.260251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.260413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.260598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.260627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.260848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.261020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.261046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 
00:27:13.653 [2024-07-27 01:35:05.261273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.261491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.261520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.261713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.261927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.261956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.262158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.262353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.262382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.262599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.262798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.262824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.263004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.263209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.263237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.263414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.263674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.263727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.263950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.264166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.264192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 
00:27:13.653 [2024-07-27 01:35:05.264363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.264601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.264647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.264841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.265017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.265043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.265229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.265417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.265449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.265686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.265877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.265905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.266077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.266269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.266297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.266569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.266862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.266890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.267081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.267232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.267262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 
00:27:13.653 [2024-07-27 01:35:05.267413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.267631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.267660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.267850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.268038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.268075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.268267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.268560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.268620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.268818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.269015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.269041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.269248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.269515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.269543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.269761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.270036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.270072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.270303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.270554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.270580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 
00:27:13.653 [2024-07-27 01:35:05.270803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.271000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.271028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.271244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.271468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.271513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.271733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.271919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.271948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.653 [2024-07-27 01:35:05.272178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.272443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.653 [2024-07-27 01:35:05.272472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.653 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.272693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.272997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.273048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.273258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.273417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.273445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.273637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.273903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.273958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 
00:27:13.654 [2024-07-27 01:35:05.274185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.274339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.274364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.274543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.274747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.274793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.275019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.275229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.275259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.275478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.275669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.275694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.275893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.276085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.276114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.276312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.276494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.276523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.276740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.277009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.277054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 
00:27:13.654 [2024-07-27 01:35:05.277260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.277414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.277440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.277642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.277866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.277911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.278088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.278235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.278260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.278472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.278762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.278834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.279031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.279263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.279293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.279514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.279858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.279915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.280122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.280344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.280373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 
00:27:13.654 [2024-07-27 01:35:05.280562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.280775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.280803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.281003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.281180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.281221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.281425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.281630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.281672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.281804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.281984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.282027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.282233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.282395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.282457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.282672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.283037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.283112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.283326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.283559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.283584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 
00:27:13.654 [2024-07-27 01:35:05.283834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.284050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.284085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.284300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.284498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.284527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.284817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.285056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.285093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.285313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.285531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.654 [2024-07-27 01:35:05.285580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.654 qpair failed and we were unable to recover it. 00:27:13.654 [2024-07-27 01:35:05.285755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.285931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.285957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.286215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.286416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.286442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.286732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.286993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.287023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 
00:27:13.655 [2024-07-27 01:35:05.287231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.287396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.287422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.287618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.287804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.287835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.288078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.288287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.288313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.288557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.288700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.288725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.288889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.289111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.289140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.289331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.289531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.289557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.289755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.289972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.290000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 
00:27:13.655 [2024-07-27 01:35:05.290202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.290369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.290412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.290612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.290942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.290992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.291205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.291469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.291521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.291691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.291883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.291910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.292092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.292281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.292306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.292481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.292668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.292696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.292892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.293120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.293147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 
00:27:13.655 [2024-07-27 01:35:05.293337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.293650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.293701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.293892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.294049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.294086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.294268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.294430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.294457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.294628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.294800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.294827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.294998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.295211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.295240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.295413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.295617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.295644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.295823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.296024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.296051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 
00:27:13.655 [2024-07-27 01:35:05.296294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.296624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.296686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.296871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.297084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.297123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.297317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.297499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.297528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.297699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.297837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.297880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.655 qpair failed and we were unable to recover it. 00:27:13.655 [2024-07-27 01:35:05.298094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.298287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.655 [2024-07-27 01:35:05.298315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.298498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.298758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.298809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.299041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.299235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.299261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 
00:27:13.656 [2024-07-27 01:35:05.299429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.299751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.299811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.300046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.300257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.300285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.300483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.300674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.300703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.300885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.301048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.301097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.301312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.301628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.301681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.301871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.302050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.302086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.302300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.302678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.302738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 
00:27:13.656 [2024-07-27 01:35:05.302963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.303193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.303219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.303397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.303544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.303571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.303803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.303981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.304010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.304225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.304369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.304396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.304541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.304691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.304719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.304940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.305143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.305172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.305388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.305611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.305640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 
00:27:13.656 [2024-07-27 01:35:05.305859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.306065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.306095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.306292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.306517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.306582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.306761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.306933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.306975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.307141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.307333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.307362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.307562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.307758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.307784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.308012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.308232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.308261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.308465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.308711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.308772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 
00:27:13.656 [2024-07-27 01:35:05.308969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.309148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.309178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.309328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.309634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.309686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.309910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.310070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.310098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.656 [2024-07-27 01:35:05.310268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.310491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.656 [2024-07-27 01:35:05.310520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.656 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.310746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.310945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.310974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.311149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.311368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.311397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.311596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.311788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.311820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 
00:27:13.657 [2024-07-27 01:35:05.311996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.312205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.312231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.312397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.312613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.312663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.312883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.313075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.313116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.313302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.313500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.313527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.313708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.313907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.313936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.314137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.314285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.314311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.314506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.314708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.314734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 
00:27:13.657 [2024-07-27 01:35:05.314937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.315273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.315324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.315522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.315722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.315748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.315888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.316067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.316094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.316279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.316500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.316531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.316724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.317024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.317094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.317345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.317732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.317794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.318012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.318215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.318245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 
00:27:13.657 [2024-07-27 01:35:05.318477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.318641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.318670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.318829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.319047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.319087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.319267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.319404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.319446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.319644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.319842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.319869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.320045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.320250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.320279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.320469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.320749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.320794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 00:27:13.657 [2024-07-27 01:35:05.321017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.321191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.321220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.657 qpair failed and we were unable to recover it. 
00:27:13.657 [2024-07-27 01:35:05.321380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.657 [2024-07-27 01:35:05.321558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.321619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.321838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.321999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.322028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.322233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.322576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.322632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.322830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.323056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.323097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.323294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.323555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.323609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.323806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.324025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.324052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.324236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.324436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.324466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 
00:27:13.658 [2024-07-27 01:35:05.324688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.324838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.324865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.325084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.325282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.325308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.325448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.325692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.325743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.325960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.326123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.326150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.326316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.326596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.326648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.326878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.327044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.327079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.327284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.327604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.327656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 
00:27:13.658 [2024-07-27 01:35:05.327880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.328052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.328084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.328292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.328509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.328535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.328707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.328995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.329049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.329264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.329489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.329518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.329678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.329856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.329898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.330085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.330243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.330270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 00:27:13.658 [2024-07-27 01:35:05.330556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.330878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.658 [2024-07-27 01:35:05.330948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.658 qpair failed and we were unable to recover it. 
00:27:13.935 [2024-07-27 01:35:05.383491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.935 [2024-07-27 01:35:05.383649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.935 [2024-07-27 01:35:05.383678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.935 qpair failed and we were unable to recover it. 00:27:13.936 [2024-07-27 01:35:05.383869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.384015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.384041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.936 qpair failed and we were unable to recover it. 00:27:13.936 [2024-07-27 01:35:05.384257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.384484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.384513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.936 qpair failed and we were unable to recover it. 00:27:13.936 [2024-07-27 01:35:05.384736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.384956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.384985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.936 qpair failed and we were unable to recover it. 00:27:13.936 [2024-07-27 01:35:05.385147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.385308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.385337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.936 qpair failed and we were unable to recover it. 00:27:13.936 [2024-07-27 01:35:05.385501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.385669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.385713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:13.936 qpair failed and we were unable to recover it. 00:27:13.936 [2024-07-27 01:35:05.385958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.386167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.386202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.936 qpair failed and we were unable to recover it. 
00:27:13.936 [2024-07-27 01:35:05.392513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.392694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.392724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.936 qpair failed and we were unable to recover it. 00:27:13.936 [2024-07-27 01:35:05.392887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.393083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.393114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.936 qpair failed and we were unable to recover it. 00:27:13.936 [2024-07-27 01:35:05.393290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.393519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.393549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.936 qpair failed and we were unable to recover it. 00:27:13.936 [2024-07-27 01:35:05.393888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.394197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.394228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.936 qpair failed and we were unable to recover it. 00:27:13.936 [2024-07-27 01:35:05.394416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.394607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.394639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.936 qpair failed and we were unable to recover it. 00:27:13.936 [2024-07-27 01:35:05.394835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.395038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.395087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.936 qpair failed and we were unable to recover it. 00:27:13.936 [2024-07-27 01:35:05.395325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.395501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.395529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.936 qpair failed and we were unable to recover it. 
00:27:13.936 [2024-07-27 01:35:05.395746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.395943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.936 [2024-07-27 01:35:05.395971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.936 qpair failed and we were unable to recover it. 00:27:13.936 [2024-07-27 01:35:05.396176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.396328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.396370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.396593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.396776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.396807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.396995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.397214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.397245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.397417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.397571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.397603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.397782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.398002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.398031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.398243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.398437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.398468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 
00:27:13.937 [2024-07-27 01:35:05.398667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.398838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.398866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.399106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.399283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.399314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.399532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.399718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.399748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.399925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.400175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.400206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.400409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.400602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.400630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.400806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.400971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.400998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.401156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.401333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.401375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 
00:27:13.937 [2024-07-27 01:35:05.401573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.401788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.401818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.402013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.402219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.402251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.402456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.402735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.402765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.402938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.403127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.403159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.403382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.403579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.403614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.403777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.403963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.403996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.404229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.404414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.404444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 
00:27:13.937 [2024-07-27 01:35:05.404639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.404833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.404865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.405115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.405352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.405382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.405579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.405745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.405776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.405995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.406183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.406215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.406399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.406580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.406607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.406789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.406973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.407004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.937 [2024-07-27 01:35:05.407200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.407382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.407413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 
00:27:13.937 [2024-07-27 01:35:05.407612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.407805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.937 [2024-07-27 01:35:05.407840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.937 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.408036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.408224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.408255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.408446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.408681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.408709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.408919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.409153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.409181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.409374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.409624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.409678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.409879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.410071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.410099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.410256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.410401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.410445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 
00:27:13.938 [2024-07-27 01:35:05.410683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.410849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.410876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.411037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.411244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.411275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.411466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.411690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.411717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.412025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.412258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.412293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.412528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.412731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.412763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.412994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.413218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.413247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.413452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.413647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.413681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 
00:27:13.938 [2024-07-27 01:35:05.413906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.414111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.414142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.414367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.414537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.414566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.414786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.414999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.415029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.415230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.415415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.415445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.415681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.415854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.415884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.416078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.416276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.416304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.416540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.416769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.416804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 
00:27:13.938 [2024-07-27 01:35:05.417005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.417234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.417262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.417437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.417637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.417665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.417940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.418149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.418178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.418357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.418551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.418581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.418749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.418942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.418973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.419199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.419396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.419427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 00:27:13.938 [2024-07-27 01:35:05.419656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.419881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.419911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.938 qpair failed and we were unable to recover it. 
00:27:13.938 [2024-07-27 01:35:05.420104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.938 [2024-07-27 01:35:05.420291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.420322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.420548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.420743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.420773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.420936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.421120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.421169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.421338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.421542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.421587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.421786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.422004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.422030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.422230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.422422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.422453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.422691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.422917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.422947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 
00:27:13.939 [2024-07-27 01:35:05.423123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.423308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.423352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.423561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.423755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.423785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.423972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.424169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.424200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.424397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.424615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.424646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.424819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.425198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.425230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.425447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.425829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.425892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.426120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.426339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.426382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 
00:27:13.939 [2024-07-27 01:35:05.426547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.426732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.426762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.426935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.427102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.427129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.427290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.427535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.427566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.427749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.427899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.427927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.428141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.428356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.428387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.428609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.428823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.428853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.429089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.429265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.429296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 
00:27:13.939 [2024-07-27 01:35:05.429492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.429677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.429708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.429919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.430116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.430158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.430332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.430508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.430550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.430731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.430963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.430994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.431224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.431392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.431421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.431638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.431837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.431867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 00:27:13.939 [2024-07-27 01:35:05.432083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.432343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.432374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.939 qpair failed and we were unable to recover it. 
00:27:13.939 [2024-07-27 01:35:05.432566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.939 [2024-07-27 01:35:05.432786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.432816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.433028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.433224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.433251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.433484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.433705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.433738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.433935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.434131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.434159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.434398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.434647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.434690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.434923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.435148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.435179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.435392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.435603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.435634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 
00:27:13.940 [2024-07-27 01:35:05.435829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.436069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.436100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.436321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.436652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.436704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.436920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.437139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.437170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.437397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.437655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.437685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.437887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.438099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.438130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.438333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.438521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.438551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.438747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.438969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.438998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 
00:27:13.940 [2024-07-27 01:35:05.439230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.439430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.439457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.439690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.439909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.439938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.440126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.440322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.440352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.440515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.440728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.440757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.440948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.441145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.441177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.441377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.441559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.441586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.441766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.441993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.442020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 
00:27:13.940 [2024-07-27 01:35:05.442251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.442421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.442451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.442690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.443009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.443070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.940 [2024-07-27 01:35:05.443272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.443451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.940 [2024-07-27 01:35:05.443493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.940 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.443696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.444048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.444121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.444339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.444528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.444558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.444754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.444963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.444989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.445181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.445374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.445404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 
00:27:13.941 [2024-07-27 01:35:05.445580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.445829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.445856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.446037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.446259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.446302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.446489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.446710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.446739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.446967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.447212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.447242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.447442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.447653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.447691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.448094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.448264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.448295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.448485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.448706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.448735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 
00:27:13.941 [2024-07-27 01:35:05.448920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.449143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.449175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.449374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.449547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.449575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.449748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.449915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.449951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.450158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.450378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.450452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.450827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.451084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.451111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.451305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.451510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.451546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.451731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.451910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.451953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 
00:27:13.941 [2024-07-27 01:35:05.452116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.452292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.452324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.452530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.452809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.452866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.453095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.453283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.453312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.453478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.453669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.453701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.453946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.454102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.454132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.454301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.454528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.454556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.454739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.454923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.454951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 
00:27:13.941 [2024-07-27 01:35:05.455102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.455297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.455324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.455534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.455720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.455791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.941 [2024-07-27 01:35:05.455988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.456221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.941 [2024-07-27 01:35:05.456254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.941 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.456482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.456812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.456868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.457076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.457307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.457334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.457520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.457688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.457718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.457915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.458146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.458177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 
00:27:13.942 [2024-07-27 01:35:05.458407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.458736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.458787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.459007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.459192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.459224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.459416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.459750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.459810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.460037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.460257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.460287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.460458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.460680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.460712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.460910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.461132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.461163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.461385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.461614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.461648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 
00:27:13.942 [2024-07-27 01:35:05.461843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.462044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.462082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.462262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.462433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.462461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.462648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.462851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.462879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.463054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.463220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.463257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.463410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.463613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.463640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.463834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.464031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.464071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.464298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.464584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.464641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 
00:27:13.942 [2024-07-27 01:35:05.464825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.465004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.465035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.465226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.465453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.465495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.465659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.465901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.465930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.466140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.466323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.466369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.466573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.466761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.466832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.467005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.467183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.467214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.467433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.467773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.467833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 
00:27:13.942 [2024-07-27 01:35:05.468022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.468237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.468265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.468447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.468626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.468655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.468839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.469032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.942 [2024-07-27 01:35:05.469070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.942 qpair failed and we were unable to recover it. 00:27:13.942 [2024-07-27 01:35:05.469291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.469607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.469655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.469876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.470047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.470090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.470296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.470525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.470574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.470771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.471012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.471043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 
00:27:13.943 [2024-07-27 01:35:05.471260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.471611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.471667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.471863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.472089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.472121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.472315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.472672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.472738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.472966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.473167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.473199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.473397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.473578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.473622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.473778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.474002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.474031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.474239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.474456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.474485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 
00:27:13.943 [2024-07-27 01:35:05.474673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.474881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.474924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.475149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.475341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.475371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.475568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.475755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.475786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.475974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.476152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.476180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.476384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.476756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.476806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.477031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.477215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.477246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.477458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.477661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.477691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 
00:27:13.943 [2024-07-27 01:35:05.477878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.478117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.478149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.478340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.478528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.478558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.478745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.478963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.478993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.479165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.479395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.479462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.479675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.479837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.479867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.480074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.480246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.480284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.480505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.480786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.480814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 
00:27:13.943 [2024-07-27 01:35:05.481011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.481185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.481220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.481422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.481577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.481604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.481753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.481964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.481997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.943 qpair failed and we were unable to recover it. 00:27:13.943 [2024-07-27 01:35:05.482174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.943 [2024-07-27 01:35:05.482325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.482352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.482519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.482701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.482728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.482901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.483096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.483130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.483340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.483523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.483552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 
00:27:13.944 [2024-07-27 01:35:05.483723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.483899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.483925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.484102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.484275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.484303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.484472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.484656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.484686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.484874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.485091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.485128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.485321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.485580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.485611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.485804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.486022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.486052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.486283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.486441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.486470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 
00:27:13.944 [2024-07-27 01:35:05.486620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.486765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.486793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.486967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.487166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.487206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.487435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.487636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.487667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.487890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.488111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.488142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.488367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.488562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.488591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.488761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.488948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.488976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.489180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.489334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.489371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 
00:27:13.944 [2024-07-27 01:35:05.489540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.489743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.489772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.489954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.490159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.490186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.490356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.490585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.490615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.490835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.490994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.491023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.491237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.491433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.491465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.491666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.491857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.491887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.492085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.492310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.492341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 
00:27:13.944 [2024-07-27 01:35:05.492580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.492757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.492793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.492973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.493163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.493193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.493390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.493584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.493620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.944 qpair failed and we were unable to recover it. 00:27:13.944 [2024-07-27 01:35:05.493816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.493990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.944 [2024-07-27 01:35:05.494018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.494204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.494380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.494409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.494592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.494797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.494824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.495024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.495237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.495265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 
00:27:13.945 [2024-07-27 01:35:05.495464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.495630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.495660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.495882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.496032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.496070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.496247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.496400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.496429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.496623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.496979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.497019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.497254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.497447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.497478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.497684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.497863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.497890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.498046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.498250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.498285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 
00:27:13.945 [2024-07-27 01:35:05.498480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.498678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.498705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.498881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.499108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.499150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.499324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.499623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.499681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.499886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.500094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.500126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.500348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.500614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.500663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.500866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.501051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.501090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.501316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.501534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.501578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 
00:27:13.945 [2024-07-27 01:35:05.501760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.501917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.501944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.502094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.502244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.502273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.502503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.502873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.502936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.503165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.503364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.503407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.503602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.503924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.503975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.504198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.504460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.504513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.945 qpair failed and we were unable to recover it. 00:27:13.945 [2024-07-27 01:35:05.504744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.945 [2024-07-27 01:35:05.504940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.504970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 
00:27:13.946 [2024-07-27 01:35:05.505159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.505475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.505540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.505730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.505951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.505981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.506192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.506478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.506537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.506736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.506931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.506961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.507153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.507337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.507367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.507562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.507750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.507778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.508013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.508253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.508285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 
00:27:13.946 [2024-07-27 01:35:05.508486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.508641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.508669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.508846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.509035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.509072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.509257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.509432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.509460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.509681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.509876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.509906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.510198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.510447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.510508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.510711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.510926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.510959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.511158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.511353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.511384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 
00:27:13.946 [2024-07-27 01:35:05.511622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.511962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.512016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.512224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.512539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.512603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.512805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.513013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.513043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.513252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.513443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.513477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.513687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.513856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.513899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.514097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.514312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.514342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.514563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.514853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.514909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 
00:27:13.946 [2024-07-27 01:35:05.515131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.515349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.515380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.515599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.515807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.515836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.516021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.516227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.516258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.516445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.516630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.516660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.516858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.517052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.517091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.946 [2024-07-27 01:35:05.517305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.517584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.946 [2024-07-27 01:35:05.517637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.946 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.517864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.518089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.518118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 
00:27:13.947 [2024-07-27 01:35:05.518337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.518704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.518763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.518948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.519138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.519169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.519362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.519663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.519730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.519961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.520155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.520186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.520415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.520652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.520707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.520936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.521119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.521149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.521367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.521706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.521753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 
00:27:13.947 [2024-07-27 01:35:05.521937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.522093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.522138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.522358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.522699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.522759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.522990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.523234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.523264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.523441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.523631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.523661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.523880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.524083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.524111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.524288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.524483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.524514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.524739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.524938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.524968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 
00:27:13.947 [2024-07-27 01:35:05.525127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.525344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.525381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.525565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.525772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.525799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.525975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.526215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.526246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.526444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.526615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.526645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.526868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.527077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.527109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.527276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.527447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.527495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.527733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.527905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.527949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 
00:27:13.947 [2024-07-27 01:35:05.528129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.528358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.528387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.528589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.528811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.528863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.529090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.529294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.529326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.529518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.529684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.529714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.529884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.530084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.530115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.530275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.530429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.947 [2024-07-27 01:35:05.530460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.947 qpair failed and we were unable to recover it. 00:27:13.947 [2024-07-27 01:35:05.530658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.530924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.530975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-07-27 01:35:05.531181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.531338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.531365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.531526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.531733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.531764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.531950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.532138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.532166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.532373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.532527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.532556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.532726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.532926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.532953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.533161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.533337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.533364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.533562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.533896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.533948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-07-27 01:35:05.534120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.534310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.534340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.534557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.534748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.534779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.534983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.535168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.535200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.535421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.535674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.535735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.535961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.536132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.536166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.536350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.536548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.536579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.536803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.536963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.537002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-07-27 01:35:05.537234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.537412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.537441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.537615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.537907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.537958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.538166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.538364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.538395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.538592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.538781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.538812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.538972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.539139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.539183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.539379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.539589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.539621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.539842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.540039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.540082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 
00:27:13.948 [2024-07-27 01:35:05.540314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.540593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.540649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.540868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.541054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.541097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.541288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.541463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.541491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.541695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.541852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.541890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.542067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.542279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.542309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.542512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.542709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.542739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.948 qpair failed and we were unable to recover it. 00:27:13.948 [2024-07-27 01:35:05.542917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.948 [2024-07-27 01:35:05.543125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.543154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 
00:27:13.949 [2024-07-27 01:35:05.543378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.543572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.543603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.543797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.543960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.543990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.544186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.544407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.544439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.544670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.545027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.545100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.545295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.545510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.545537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.545717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.545941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.545972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.546193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.546393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.546423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 
00:27:13.949 [2024-07-27 01:35:05.546636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.546883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.546914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.547145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.547325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.547358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.547592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.547827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.547854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.548055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.548268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.548300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.548504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.548730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.548760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.548999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.549217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.549249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.549442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.549618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.549645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 
00:27:13.949 [2024-07-27 01:35:05.549817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.550035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.550070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.550273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.550432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.550463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.550650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.550937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.550996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.551176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.551387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.551419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.551609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.551786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.551816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.552015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.552222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.552250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.552427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.552613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.552643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 
00:27:13.949 [2024-07-27 01:35:05.552834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.553006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.553041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.553284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.553459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.553487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.949 qpair failed and we were unable to recover it. 00:27:13.949 [2024-07-27 01:35:05.553670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.949 [2024-07-27 01:35:05.553874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.950 [2024-07-27 01:35:05.553904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.950 qpair failed and we were unable to recover it. 00:27:13.950 [2024-07-27 01:35:05.554099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.950 [2024-07-27 01:35:05.554286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.950 [2024-07-27 01:35:05.554317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.950 qpair failed and we were unable to recover it. 00:27:13.950 [2024-07-27 01:35:05.554505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.950 [2024-07-27 01:35:05.554787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.950 [2024-07-27 01:35:05.554838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.950 qpair failed and we were unable to recover it. 00:27:13.950 [2024-07-27 01:35:05.555036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.950 [2024-07-27 01:35:05.555206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.950 [2024-07-27 01:35:05.555248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.950 qpair failed and we were unable to recover it. 00:27:13.950 [2024-07-27 01:35:05.555479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.950 [2024-07-27 01:35:05.555685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.950 [2024-07-27 01:35:05.555747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.950 qpair failed and we were unable to recover it. 
00:27:13.955 [2024-07-27 01:35:05.622935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.623215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.623271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-27 01:35:05.623483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.623719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.623751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-27 01:35:05.623989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.624165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.624194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-27 01:35:05.624394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.624591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.624621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-27 01:35:05.624839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.625030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.625066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-27 01:35:05.625255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.625473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.625499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-27 01:35:05.625696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.625859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.625888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 
00:27:13.955 [2024-07-27 01:35:05.626075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.626277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.626315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-27 01:35:05.626473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.626677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.626705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-27 01:35:05.626929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.627114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.627145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-27 01:35:05.627313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.627544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.627603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-27 01:35:05.627800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.628024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.628051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.955 qpair failed and we were unable to recover it. 00:27:13.955 [2024-07-27 01:35:05.628272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.628446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.955 [2024-07-27 01:35:05.628476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.628677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.628848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.628874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 
00:27:13.956 [2024-07-27 01:35:05.629076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.629303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.629335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.629529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.629695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.629724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.629921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.630100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.630127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.630308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.630503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.630537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.630698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.630896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.630931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.631177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.631414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.631445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.631691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.631883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.631914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 
00:27:13.956 [2024-07-27 01:35:05.632131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.632343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.632371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.632571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.632915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.632973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.633173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.633443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.633493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.633720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.633890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.633919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.634132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.634334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.634364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.634517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.634743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.634770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.634949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.635099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.635150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 
00:27:13.956 [2024-07-27 01:35:05.635355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.635565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.635598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.635756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.635950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.635982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.636218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.636411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.636450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.636671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.636850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.636877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.637100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.637265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.637295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.637516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.637698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.637728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.637900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.638102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.638131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 
00:27:13.956 [2024-07-27 01:35:05.638322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.638516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.638547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.638769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.638995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.639026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.639243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.639441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.639468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.639710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.639903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.639932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.640134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.640333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.640362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.956 qpair failed and we were unable to recover it. 00:27:13.956 [2024-07-27 01:35:05.640537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.640727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.956 [2024-07-27 01:35:05.640757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.640977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.641188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.641217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 
00:27:13.957 [2024-07-27 01:35:05.641400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.641585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.641615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.641810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.641976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.642006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.642241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.642404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.642433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.642654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.642856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.642913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.643145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.643345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.643386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.643589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.643766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.643796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.643983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.644130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.644158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 
00:27:13.957 [2024-07-27 01:35:05.644358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.644671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.644723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.644918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.645096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.645131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.645316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.645516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.645544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.645749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.645914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.645942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.646169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.646362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.646391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.646589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.646797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.646852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.647041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.647246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.647279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 
00:27:13.957 [2024-07-27 01:35:05.647483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.647704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.647733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.647902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.648092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.648124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.648333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.648621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.648672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.648865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.649066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.649094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.649295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.649535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.649561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.649774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.649975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.650005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.650236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.650403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.650450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 
00:27:13.957 [2024-07-27 01:35:05.650684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.650838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.650873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.651090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.651290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.651329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.651550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.651850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.651908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.652103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.652270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.652301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.652505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.652684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.652716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.957 [2024-07-27 01:35:05.652926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.653145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.957 [2024-07-27 01:35:05.653177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.957 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.653345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.653568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.653640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 
00:27:13.958 [2024-07-27 01:35:05.653841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.654005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.654035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.654247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.654406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.654439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.654645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.654941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.655000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.655177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.655322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.655364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.655551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.655744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.655774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.656000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.656188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.656217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.656396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.656584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.656613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 
00:27:13.958 [2024-07-27 01:35:05.656812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.657007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.657049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.657274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.657594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.657649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.657853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.657999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.658026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.658215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.658402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.658433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.658661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.658820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.658851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.659049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.659216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.659245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.659440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.659651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.659678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 
00:27:13.958 [2024-07-27 01:35:05.659834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.660000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.660026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.660208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.660414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.660443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.660647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.660832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.660862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.661098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.661271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.661305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.661486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.661803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.661865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.662093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.662270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.662299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.662496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.662698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.662726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 
00:27:13.958 [2024-07-27 01:35:05.662928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.663155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.663187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.663405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.663649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.663689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.663906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.664184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.664215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.664449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.664671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.664703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.664882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.665081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.665112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.665308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.665535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.958 [2024-07-27 01:35:05.665562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.958 qpair failed and we were unable to recover it. 00:27:13.958 [2024-07-27 01:35:05.665762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.665983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.666023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 
00:27:13.959 [2024-07-27 01:35:05.666260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.666467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.666494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.666705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.666907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.666936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.667141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.667344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.667374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.667561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.667737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.667765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.667941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.668131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.668162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.668364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.668690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.668740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.668931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.669117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.669148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 
00:27:13.959 [2024-07-27 01:35:05.669341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.669564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.669595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.669785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.669962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.669989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.670138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.670302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.670346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.670548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.670712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.670741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.670972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.671187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.671216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.671409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.671719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.671773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.671978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.672178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.672209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 
00:27:13.959 [2024-07-27 01:35:05.672378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.672568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.672633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.672794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.672981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.673011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.673218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.673415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.673446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.673668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.673885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.673915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.674104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.674278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.674307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.674504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.674891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.674945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.675155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.675384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.675414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 
00:27:13.959 [2024-07-27 01:35:05.675595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.675906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.675959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.676160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.676436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.676487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.959 [2024-07-27 01:35:05.676706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.676930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.959 [2024-07-27 01:35:05.676959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.959 qpair failed and we were unable to recover it. 00:27:13.960 [2024-07-27 01:35:05.677150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.677340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.677372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.960 qpair failed and we were unable to recover it. 00:27:13.960 [2024-07-27 01:35:05.677589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.677850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.677910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.960 qpair failed and we were unable to recover it. 00:27:13.960 [2024-07-27 01:35:05.678120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.678324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.678356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.960 qpair failed and we were unable to recover it. 00:27:13.960 [2024-07-27 01:35:05.678552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.678722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.678753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.960 qpair failed and we were unable to recover it. 
00:27:13.960 [2024-07-27 01:35:05.678951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.679145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.679176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.960 qpair failed and we were unable to recover it. 00:27:13.960 [2024-07-27 01:35:05.679344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.679526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.679555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.960 qpair failed and we were unable to recover it. 00:27:13.960 [2024-07-27 01:35:05.679753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.679962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.679989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.960 qpair failed and we were unable to recover it. 00:27:13.960 [2024-07-27 01:35:05.680166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.680356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.960 [2024-07-27 01:35:05.680387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:13.960 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.680556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.680706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.680735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.680937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.681184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.681214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.681408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.681729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.681784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 
00:27:14.228 [2024-07-27 01:35:05.681985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.682172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.682200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.682397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.682618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.682647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.682846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.683081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.683111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.683342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.683642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.683696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.683884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.684075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.684106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.684326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.684653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.684712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.684909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.685128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.685158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 
00:27:14.228 [2024-07-27 01:35:05.685376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.685693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.685746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.685939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.686131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.686161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.686360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.686532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.686559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.686713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.686904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.686934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.687100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.687291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.687321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.687500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.687732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.687789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.687978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.688152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.688180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 
00:27:14.228 [2024-07-27 01:35:05.688364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.688640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.688693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.688938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.689150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.689177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.689362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.689560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.689587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.689759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.689943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.689973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.690149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.690337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.690368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.690586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.690786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.690813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 00:27:14.228 [2024-07-27 01:35:05.690987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.691182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.228 [2024-07-27 01:35:05.691213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.228 qpair failed and we were unable to recover it. 
00:27:14.229 [2024-07-27 01:35:05.691426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.691576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.691603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.691822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.692040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.692085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.692293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.692511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.692541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.692740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.692932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.692961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.693129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.693324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.693363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.693584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.693897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.693948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.694155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.694355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.694382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 
00:27:14.229 [2024-07-27 01:35:05.694586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.694864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.694913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.695132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.695349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.695378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.695575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.695770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.695800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.695952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.696118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.696149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.696339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.696592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.696643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.696868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.697095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.697125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.697310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.697506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.697533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 
00:27:14.229 [2024-07-27 01:35:05.697696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.697889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.697924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.698096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.698278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.698305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.698504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.698890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.698953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.699147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.699339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.699369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.699566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.699763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.699850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.700049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.700223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.700254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.700472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.700755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.700813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 
00:27:14.229 [2024-07-27 01:35:05.701015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.701188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.701218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.701438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.701679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.701737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.701966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.702168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.702195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.702402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.702638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.702695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.702889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.703048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.703086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.703281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.703605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.703657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.229 qpair failed and we were unable to recover it. 00:27:14.229 [2024-07-27 01:35:05.703886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.229 [2024-07-27 01:35:05.704051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.704091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 
00:27:14.230 [2024-07-27 01:35:05.704289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.704490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.704521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.704683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.704893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.704951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.705159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.705412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.705471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.705671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.705819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.705846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.706021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.706204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.706232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.706405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.706714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.706769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.706956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.707180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.707216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 
00:27:14.230 [2024-07-27 01:35:05.707430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.707763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.707820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.708016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.708185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.708215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.708410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.708698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.708752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.708950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.709140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.709169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.709394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.709667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.709716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.709905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.710123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.710153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.710378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.710605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.710635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 
00:27:14.230 [2024-07-27 01:35:05.710808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.710961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.710988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.711214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.711508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.711561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.711757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.712007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.712036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.712255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.712424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.712454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.712673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.712997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.713057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.713283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.713496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.713523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.713697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.713933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.713963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 
00:27:14.230 [2024-07-27 01:35:05.714160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.714330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.714356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.714610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.714831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.714881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.715084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.715379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.715431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.715651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.715886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.715950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.716175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.716397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.716427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.230 [2024-07-27 01:35:05.716622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.716845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.230 [2024-07-27 01:35:05.716874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.230 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.717076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.717290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.717320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 
00:27:14.231 [2024-07-27 01:35:05.717516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.720270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.720316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.720497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.720734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.720794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.720980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.721174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.721206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.721399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.721579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.721622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.721832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.722050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.722089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.722273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.722488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.722540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.722731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.723053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.723135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 
00:27:14.231 [2024-07-27 01:35:05.723412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.723641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.723668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.723901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.724075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.724106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.724322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.724720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.724772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.725114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.725308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.725335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.725575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.725795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.725824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.726023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.726251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.726281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.726486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.726722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.726774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 
00:27:14.231 [2024-07-27 01:35:05.727068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.727335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.727365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.727560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.727767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.727793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.727998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.728186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.728216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.728445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.728727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.728756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.729103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.729326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.729356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.729567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.729766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.729793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.729997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.730210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.730237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 
00:27:14.231 [2024-07-27 01:35:05.730431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.730644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.730670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.730815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.730994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.731037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.731249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.731521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.731573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.731804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.731992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.732021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.732226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.732428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.732457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.231 qpair failed and we were unable to recover it. 00:27:14.231 [2024-07-27 01:35:05.732677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.732894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.231 [2024-07-27 01:35:05.732923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.733120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.733325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.733355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 
00:27:14.232 [2024-07-27 01:35:05.733550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.733796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.733850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.734052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.734230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.734260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.734480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.734764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.734821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.735038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.735261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.735291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.735491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.735717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.735744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.735954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.736135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.736168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.736381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.736647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.736704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 
00:27:14.232 [2024-07-27 01:35:05.736898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.737113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.737143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.737382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.737637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.737696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.738021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.738323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.738368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.738561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.738881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.738933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.739134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.739344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.739371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.739575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.739765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.739794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.739979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.740169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.740199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 
00:27:14.232 [2024-07-27 01:35:05.740411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.740636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.740662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.740890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.741083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.741114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.741280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.741467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.741496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.741689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.741910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.741937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.742140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.742284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.742312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.742506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.742727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.742770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.232 [2024-07-27 01:35:05.743002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.743207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.743235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 
00:27:14.232 [2024-07-27 01:35:05.743442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.743861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.232 [2024-07-27 01:35:05.743917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.232 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.744141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.744311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.744353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.744560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.744749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.744779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.744970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.745201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.745228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.745451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.745726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.745779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.746074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.746396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.746448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.746717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.746943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.746972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 
00:27:14.233 [2024-07-27 01:35:05.747188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.747378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.747408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.747566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.747758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.747787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.748051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.748303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.748332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.748553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.748815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.748846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.749016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.749220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.749251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.749446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.749870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.749923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.750118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.750309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.750335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 
00:27:14.233 [2024-07-27 01:35:05.750593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.750895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.750956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.751219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.751428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.751492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.751666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.751885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.751913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.752092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.752276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.752305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.752614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.752818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.752845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.753006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.753195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.753226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.753439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.753723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.753782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 
00:27:14.233 [2024-07-27 01:35:05.753975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.754178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.754209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.754401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.754677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.754703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.754914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.755077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.755122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.755323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.755619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.755674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.755892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.756052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.756088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.756266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.756510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.756551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.233 [2024-07-27 01:35:05.756750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.757044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.757109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 
00:27:14.233 [2024-07-27 01:35:05.757281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.757504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.233 [2024-07-27 01:35:05.757557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.233 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.757750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.757968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.757998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.758201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.758404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.758434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.758661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.758847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.758873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.759077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.759260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.759289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.759491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.759719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.759746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.759943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.760110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.760140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 
00:27:14.234 [2024-07-27 01:35:05.760348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.760564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.760592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.760776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.760979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.761008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.761197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.761396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.761437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.761671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.762004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.762050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.762321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.762636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.762704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.762928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.763131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.763159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.763351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.763544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.763574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 
00:27:14.234 [2024-07-27 01:35:05.763772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.763933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.763962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.764149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.764359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.764427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.764624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.764840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.764869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.765067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.765263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.765294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.765489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.765837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.765904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.766188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.766404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.766464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.766691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.766958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.767011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 
00:27:14.234 [2024-07-27 01:35:05.767225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.767484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.767536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.767765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.767931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.767967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.768218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.768511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.768572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.768794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.768992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.769022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.769227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.769412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.769442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.769672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.769980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.770051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.770263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.770549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.770601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 
00:27:14.234 [2024-07-27 01:35:05.770792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.770995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.234 [2024-07-27 01:35:05.771022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.234 qpair failed and we were unable to recover it. 00:27:14.234 [2024-07-27 01:35:05.771238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.771409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.771438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.771666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.772014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.772085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.772295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.772508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.772538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.772745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.772965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.772999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.773206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.773431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.773482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.773703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.773902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.773929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 
00:27:14.235 [2024-07-27 01:35:05.774112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.774334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.774363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.774560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.774834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.774861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.775069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.775240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.775270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.775459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.775719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.775770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.775941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.776141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.776168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.776364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.776534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.776563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.776731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.776953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.776982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 
00:27:14.235 [2024-07-27 01:35:05.777207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.777380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.777412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.777571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.777747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.777776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.778002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.778205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.778233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.778441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.778716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.778765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.778975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.779144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.779172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.779346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.779566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.779596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.779757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.779909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.779939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 
00:27:14.235 [2024-07-27 01:35:05.780127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.780291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.780321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.780489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.780710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.780782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.780983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.781158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.781202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.781376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.781598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.781629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.781840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.782053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.782090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.782281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.782495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.782526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.235 [2024-07-27 01:35:05.782752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.782919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.782949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 
00:27:14.235 [2024-07-27 01:35:05.783155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.783327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.235 [2024-07-27 01:35:05.783354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.235 qpair failed and we were unable to recover it. 00:27:14.236 [2024-07-27 01:35:05.783541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.783813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.783865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.236 qpair failed and we were unable to recover it. 00:27:14.236 [2024-07-27 01:35:05.784085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.784251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.784280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.236 qpair failed and we were unable to recover it. 00:27:14.236 [2024-07-27 01:35:05.784479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.784624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.784670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.236 qpair failed and we were unable to recover it. 00:27:14.236 [2024-07-27 01:35:05.784868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.785067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.785098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.236 qpair failed and we were unable to recover it. 00:27:14.236 [2024-07-27 01:35:05.785308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.785496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.785564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.236 qpair failed and we were unable to recover it. 00:27:14.236 [2024-07-27 01:35:05.785760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.786263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.786296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.236 qpair failed and we were unable to recover it. 
00:27:14.236 [2024-07-27 01:35:05.786540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.786812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.786842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.236 qpair failed and we were unable to recover it. 00:27:14.236 [2024-07-27 01:35:05.787072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.787277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.787304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.236 qpair failed and we were unable to recover it. 00:27:14.236 [2024-07-27 01:35:05.787505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.787683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.787715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.236 qpair failed and we were unable to recover it. 00:27:14.236 [2024-07-27 01:35:05.787937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.788134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.788164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.236 qpair failed and we were unable to recover it. 00:27:14.236 [2024-07-27 01:35:05.788363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.788554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.788584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.236 qpair failed and we were unable to recover it. 00:27:14.236 [2024-07-27 01:35:05.788781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.788999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.789029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.236 qpair failed and we were unable to recover it. 00:27:14.236 [2024-07-27 01:35:05.789271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.789538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.236 [2024-07-27 01:35:05.789592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.236 qpair failed and we were unable to recover it. 
00:27:14.241 [2024-07-27 01:35:05.857474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.241 [2024-07-27 01:35:05.857668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.241 [2024-07-27 01:35:05.857711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.241 qpair failed and we were unable to recover it. 00:27:14.241 [2024-07-27 01:35:05.857913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.241 [2024-07-27 01:35:05.858102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.241 [2024-07-27 01:35:05.858144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.241 qpair failed and we were unable to recover it. 00:27:14.241 [2024-07-27 01:35:05.858362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.241 [2024-07-27 01:35:05.858514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.241 [2024-07-27 01:35:05.858541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.241 qpair failed and we were unable to recover it. 00:27:14.241 [2024-07-27 01:35:05.858801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.241 [2024-07-27 01:35:05.859030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.241 [2024-07-27 01:35:05.859070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.241 qpair failed and we were unable to recover it. 00:27:14.241 [2024-07-27 01:35:05.859278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.241 [2024-07-27 01:35:05.859469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.241 [2024-07-27 01:35:05.859501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.241 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.859745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.859896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.859923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.860114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.860347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.860378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 
00:27:14.242 [2024-07-27 01:35:05.860613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.860941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.860991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.861251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.861470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.861502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.861734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.861935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.861962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.862200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.862386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.862415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.862591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.862828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.862875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.863109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.863293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.863334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.863529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.863731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.863759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 
00:27:14.242 [2024-07-27 01:35:05.863912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.864140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.864174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.864367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.864623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.864670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.864943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.865185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.865215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.865435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.865643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.865692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.865950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.866181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.866223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.866412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.866614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.866640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.866814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.866990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.867018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 
00:27:14.242 [2024-07-27 01:35:05.867183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.867349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.867378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.867575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.867893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.867947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.868157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.868311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.868365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.868525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.868678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.868714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.868896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.869076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.869133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.869328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.869529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.869579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.869854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.870020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.870051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 
00:27:14.242 [2024-07-27 01:35:05.870264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.870472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.242 [2024-07-27 01:35:05.870502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.242 qpair failed and we were unable to recover it. 00:27:14.242 [2024-07-27 01:35:05.870725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.870891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.870919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.871073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.871269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.871297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.871509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.871713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.871739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.871912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.872098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.872155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.872308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.872481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.872525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.872729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.872920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.872948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 
00:27:14.243 [2024-07-27 01:35:05.873108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.873277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.873305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.873508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.873718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.873767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.873976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.874166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.874195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.874369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.874523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.874551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.874700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.874861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.874905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.875097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.875298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.875325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.875504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.875643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.875670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 
00:27:14.243 [2024-07-27 01:35:05.875870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.876070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.876130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.876286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.876460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.876491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.876703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.876889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.876917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.877154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.877302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.877331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.877547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.877739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.877777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.877957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.878133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.878160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.878312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.878559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.878606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 
00:27:14.243 [2024-07-27 01:35:05.878780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.878994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.879023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.879204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.879349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.879394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.879583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.879792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.879851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.880027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.880217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.880254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.880421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.880595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.880639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.880828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.881051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.881117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.881291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.881473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.881504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 
00:27:14.243 [2024-07-27 01:35:05.881717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.881898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.881926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.243 qpair failed and we were unable to recover it. 00:27:14.243 [2024-07-27 01:35:05.882135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.243 [2024-07-27 01:35:05.882282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.882321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.882496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.882689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.882720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.882919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.883087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.883122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.883296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.883507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.883534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.883707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.883886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.883915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.884105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.884280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.884310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 
00:27:14.244 [2024-07-27 01:35:05.884517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.884762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.884808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.884998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.885208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.885236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.885434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.885586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.885614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.885767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.885939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.885965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.886151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.886333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.886360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.886507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.886684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.886711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.886849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.887036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.887073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 
00:27:14.244 [2024-07-27 01:35:05.887271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.887478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.887525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.887694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.887908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.887938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.888099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.888282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.888315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.888536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.888762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.888813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.889021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.889177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.889204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.889405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.889715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.889768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.889943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.890163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.890190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 
00:27:14.244 [2024-07-27 01:35:05.890347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.890527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.890572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.890755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.890957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.890987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.891187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.891329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.891370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.891594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.891757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.891796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.891947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.892157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.892184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.892361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.892553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.892582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.892768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.892989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.893019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 
00:27:14.244 [2024-07-27 01:35:05.893218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.893367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.893394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.244 qpair failed and we were unable to recover it. 00:27:14.244 [2024-07-27 01:35:05.893576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.244 [2024-07-27 01:35:05.893765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.893794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.893977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.894183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.894211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.894375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.894604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.894647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.894844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.895042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.895092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.895269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.895480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.895509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.895741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.895941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.895970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 
00:27:14.245 [2024-07-27 01:35:05.896145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.896323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.896349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.896610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.896830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.896860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.897065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.897275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.897301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.897510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.897672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.897701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.897871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.898069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.898122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.898277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.898491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.898523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.898736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.898935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.898964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 
00:27:14.245 [2024-07-27 01:35:05.899135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.899291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.899328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.899532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.899798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.899845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.900016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.900227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.900253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.900432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.900612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.900639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.900815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.901022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.901051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.901234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.901407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.901437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.901659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.901838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.901864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 
00:27:14.245 [2024-07-27 01:35:05.902086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.902294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.902332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.902486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.902712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.902741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.902950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.903148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.903178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.903353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.903534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.903577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.903779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.903955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.903982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.904189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.904443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.904489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.904685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.904908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.904948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 
00:27:14.245 [2024-07-27 01:35:05.905147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.905358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.905384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.245 qpair failed and we were unable to recover it. 00:27:14.245 [2024-07-27 01:35:05.905557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.245 [2024-07-27 01:35:05.905730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.905757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.905927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.906121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.906148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.906293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.906525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.906554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.906742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.906935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.906965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.907164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.907372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.907399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.907618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.907765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.907791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 
00:27:14.246 [2024-07-27 01:35:05.907964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.908138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.908165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.908358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.908530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.908580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.908785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.908930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.908972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.909178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.909346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.909373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.909575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.909802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.909852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.910075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.910254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.910284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.910487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.910676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.910706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 
00:27:14.246 [2024-07-27 01:35:05.910919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.911087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.911127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.911326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.911538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.911593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.911769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.911947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.911974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.912126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.912297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.912323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.912530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.912774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.912823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.913003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.913212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.913241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.913436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.913630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.913660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 
00:27:14.246 [2024-07-27 01:35:05.913878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.914107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.914143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.914297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.914506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.914553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.914769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.914985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.915015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.915248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.915393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.915420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.246 [2024-07-27 01:35:05.915599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.915769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.246 [2024-07-27 01:35:05.915817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.246 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.916016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.916208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.916238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.916405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.916565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.916595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 
00:27:14.247 [2024-07-27 01:35:05.916786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.917000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.917031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.917246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.917454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.917484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.917701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.917892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.917922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.918128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.918276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.918319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.918498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.918724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.918751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.918916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.919185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.919214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.919393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.919589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.919639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 
00:27:14.247 [2024-07-27 01:35:05.919847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.920042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.920084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.920269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.920502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.920529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.920707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.920880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.920908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.921137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.921307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.921334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.921531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.921751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.921798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.921961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.922122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.922168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.922335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.922490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.922519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 
00:27:14.247 [2024-07-27 01:35:05.922794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.922959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.922989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.923171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.923319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.923357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.923535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.923769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.923820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.923988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.924163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.924194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.924341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.924535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.924583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.924813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.924993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.925022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.925234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.925391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.925429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 
00:27:14.247 [2024-07-27 01:35:05.925621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.925857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.925906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.926082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.926246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.926273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.926485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.926706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.926753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.926930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.927117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.927144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.247 [2024-07-27 01:35:05.927311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.927525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.247 [2024-07-27 01:35:05.927573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.247 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.927746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.927971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.928000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.928183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.928342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.928368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 
00:27:14.248 [2024-07-27 01:35:05.928580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.928798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.928845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.928998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.929181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.929208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.929358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.929555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.929585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.929800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.930014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.930043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.930237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.930383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.930410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.930632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.930821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.930851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.931040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.931221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.931248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 
00:27:14.248 [2024-07-27 01:35:05.931422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.931715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.931763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.931954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.932116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.932141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.932314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.932544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.932593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.932859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.933103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.933133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.933310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.933501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.933531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.933729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.933904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.933931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.934129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.934329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.934364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 
00:27:14.248 [2024-07-27 01:35:05.934595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.934771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.934798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.935008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.935179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.935205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.935376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.935575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.935603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.935752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.935976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.936005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.936188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.936378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.936425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.936673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.936908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.936935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.937124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.937265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.937290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 
00:27:14.248 [2024-07-27 01:35:05.937484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.937794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.937846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.938049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.938220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.938246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.938421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.938571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.938615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.938845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.939047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.939083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.939256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.939417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.248 [2024-07-27 01:35:05.939444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.248 qpair failed and we were unable to recover it. 00:27:14.248 [2024-07-27 01:35:05.939652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.939827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.939874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.940078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.940270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.940297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 
00:27:14.249 [2024-07-27 01:35:05.940505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.940719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.940767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.940966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.941195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.941224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.941443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.941672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.941722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.941911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.942203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.942232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.942475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.942679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.942732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.942951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.943123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.943152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.943341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.943615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.943663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 
00:27:14.249 [2024-07-27 01:35:05.943832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.944052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.944088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.944267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.944430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.944465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.944703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.944987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.945036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.945247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.945443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.945473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.945668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.945938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.945985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.946187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.946391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.946419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.946591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.946789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.946837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 
00:27:14.249 [2024-07-27 01:35:05.947079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.947262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.947289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.947490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.947656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.947683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.947882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.948089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.948125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.948318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.948531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.948577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.948802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.949012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.949041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.949252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.949452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.949483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.949648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.949936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.949991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 
00:27:14.249 [2024-07-27 01:35:05.950187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.950467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.950519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.950750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.950912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.950941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.951146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.951345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.951371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.951596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.951791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.951819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.952020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.952235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.249 [2024-07-27 01:35:05.952265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.249 qpair failed and we were unable to recover it. 00:27:14.249 [2024-07-27 01:35:05.952487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.250 [2024-07-27 01:35:05.952701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.250 [2024-07-27 01:35:05.952730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.250 qpair failed and we were unable to recover it. 00:27:14.250 [2024-07-27 01:35:05.952923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.250 [2024-07-27 01:35:05.953141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.250 [2024-07-27 01:35:05.953170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.250 qpair failed and we were unable to recover it. 
00:27:14.250 [2024-07-27 01:35:05.953345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.250 [2024-07-27 01:35:05.953502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.250 [2024-07-27 01:35:05.953529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420
00:27:14.250 qpair failed and we were unable to recover it.
00:27:14.250 [... the same three-message error sequence — two posix.c:1032:posix_sock_create connect() failures with errno = 111, followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats for every connection attempt logged between 01:35:05.953 and 01:35:06.020; only the timestamps differ ...]
00:27:14.522 [2024-07-27 01:35:06.020157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.522 [2024-07-27 01:35:06.020339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.522 [2024-07-27 01:35:06.020384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420
00:27:14.522 qpair failed and we were unable to recover it.
00:27:14.522 [2024-07-27 01:35:06.020606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.522 [2024-07-27 01:35:06.020862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.522 [2024-07-27 01:35:06.020914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.522 qpair failed and we were unable to recover it. 00:27:14.522 [2024-07-27 01:35:06.021146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.522 [2024-07-27 01:35:06.021329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.522 [2024-07-27 01:35:06.021359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.522 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.021564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.021727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.021757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.021970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.022185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.022215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.022438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.022694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.022724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.022924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.023122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.023152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.023373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.023584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.023632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 
00:27:14.523 [2024-07-27 01:35:06.023827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.024021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.024050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.024256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.024554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.024608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.024833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.025027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.025078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.025281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.025556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.025608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.025845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.026046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.026086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.026306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.026456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.026483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.026633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.026886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.026934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 
00:27:14.523 [2024-07-27 01:35:06.027148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.027338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.027367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.027564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.027758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.027788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.027977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.028151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.028182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.028371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.028557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.028587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.028748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.028923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.028953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.029144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.029303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.029334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.029533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.029727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.029757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 
00:27:14.523 [2024-07-27 01:35:06.029921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.030089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.030120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.030318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.030516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.030544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.030730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.030876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.030903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.031108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.031279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.031308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.031501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.031680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.031707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.523 qpair failed and we were unable to recover it. 00:27:14.523 [2024-07-27 01:35:06.031857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.523 [2024-07-27 01:35:06.032022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.032054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.032293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.032516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.032546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 
00:27:14.524 [2024-07-27 01:35:06.032712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.032939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.032969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.033148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.033324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.033368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.033558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.033769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.033816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.034018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.034210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.034238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.034413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.034606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.034653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.034839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.034998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.035027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.035212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.035371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.035398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 
00:27:14.524 [2024-07-27 01:35:06.035609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.035786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.035815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.036003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.036210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.036238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.036411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.036584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.036611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.036786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.036981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.037011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.037189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.037336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.037363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.037555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.037808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.037856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.038052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.038233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.038264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 
00:27:14.524 [2024-07-27 01:35:06.038458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.038655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.038683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.038844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.039074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.039105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.039283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.039506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.039553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.039753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.039957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.039984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.040150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.040311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.040360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.040561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.040751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.040839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.041032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.041221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.041249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 
00:27:14.524 [2024-07-27 01:35:06.041408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.041658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.041702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.041880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.042054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.042091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.042258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.042421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.042450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-07-27 01:35:06.042668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-07-27 01:35:06.042856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.042902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.043094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.043264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.043293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.043465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.043695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.043744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.043936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.044154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.044182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 
00:27:14.525 [2024-07-27 01:35:06.044332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.044528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.044554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.044768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.044965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.044991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.045171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.045343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.045373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.045556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.045699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.045727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.045905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.046069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.046096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.046276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.046474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.046505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.046683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.046868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.046898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 
00:27:14.525 [2024-07-27 01:35:06.047093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.047291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.047321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.047518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.047734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.047763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.047976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.048150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.048177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.048324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.048544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.048595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.048777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.048978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.049005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.049184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.049363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.049393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.049556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.049745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.049773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 
00:27:14.525 [2024-07-27 01:35:06.049957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.050171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.050199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.050355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.050504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.050536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.050756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.050938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.050968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.051172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.051314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.051341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.051513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.051695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.051721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.051929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.052116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.052147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.052341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.052560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.052608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 
00:27:14.525 [2024-07-27 01:35:06.052804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.052984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.053011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.053197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.053349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.053377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.053600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.053822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.053848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-07-27 01:35:06.054047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.054231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-07-27 01:35:06.054258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.054435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.054627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.054664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.054882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.055091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.055122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.055290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.055429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.055456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 
00:27:14.526 [2024-07-27 01:35:06.055665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.055855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.055887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.056047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.056227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.056257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.056425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.056644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.056693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.056911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.057111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.057142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.057338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.057506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.057553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.057747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.057893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.057920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.058121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.058275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.058302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 
00:27:14.526 [2024-07-27 01:35:06.058499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.058723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.058753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.058932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.059130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.059161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.059347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.059559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.059607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.059806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.060001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.060031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.060214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.060357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.060400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.060591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.060778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.060825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-07-27 01:35:06.061025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.061185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-07-27 01:35:06.061213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 
00:27:14.526 [2024-07-27 01:35:06.061369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.526 [2024-07-27 01:35:06.061606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.526 [2024-07-27 01:35:06.061640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420
00:27:14.526 qpair failed and we were unable to recover it.
[... the same block - two posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp_qpair_connect_sock "sock connection error" line, and "qpair failed and we were unable to recover it." - repeats for every subsequent reconnect attempt from 01:35:06.061 through 01:35:06.132 (log time 00:27:14.526 to 00:27:14.532); every attempt targets tqpair=0x7fdde8000b90 at addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:27:14.532 [2024-07-27 01:35:06.132291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.132637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.132700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.132999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.133187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.133216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.133400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.133791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.133844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.134038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.134215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.134247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.134444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.134799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.134853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.135116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.135342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.135372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.135570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.135926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.135973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 
00:27:14.532 [2024-07-27 01:35:06.136258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.136662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.136712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.136909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.137094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.137122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.137325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.137470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.137512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.137730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.137889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.137920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.138117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.138284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.138316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.138592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.138981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.139046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.139270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.139471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.139499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 
00:27:14.532 [2024-07-27 01:35:06.139725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.139959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.139989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.140189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.140437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.140487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.140709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.140902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.140932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.141127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.141307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.141337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.141603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.141932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.141991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.142183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.142571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.142624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.142881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.143116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.143161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 
00:27:14.532 [2024-07-27 01:35:06.143354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.143665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.143691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-07-27 01:35:06.143963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-07-27 01:35:06.144170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.144200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.144426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.144647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.144676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.144898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.145087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.145118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.145316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.145545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.145596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.145782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.146043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.146081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.146251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.146447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.146477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 
00:27:14.533 [2024-07-27 01:35:06.146671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.146861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.146892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.147055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.147267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.147296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.147469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.147688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.147718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.147919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.148113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.148143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.148345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.148539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.148566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.148751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.148971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.149000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.149203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.149380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.149409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 
00:27:14.533 [2024-07-27 01:35:06.149638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.149969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.150022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.150232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.150412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.150440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.150630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.150852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.150896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.151134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.151335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.151382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.151572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.151907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.151966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.152193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.152381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.152413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.152594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.152768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.152795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 
00:27:14.533 [2024-07-27 01:35:06.152971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.153162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.153194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.153372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.153592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.153622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.153819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.154040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.154079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.154258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.154437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.154464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.154619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.154797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-07-27 01:35:06.154833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-07-27 01:35:06.155024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.155232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.155260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.155487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.155727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.155793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 
00:27:14.534 [2024-07-27 01:35:06.155977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.156150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.156179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.156389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.156541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.156570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.156719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.156923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.156958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.157184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.157340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.157371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.157597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.157818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.157871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.158076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.158278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.158304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.158512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.158786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.158837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 
00:27:14.534 [2024-07-27 01:35:06.159108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.159314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.159343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.159546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.159804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.159859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.160090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.160281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.160310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.160513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.160784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.160814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.161004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.161162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.161192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.161459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.161628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.161668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.161908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.162073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.162119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 
00:27:14.534 [2024-07-27 01:35:06.162320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.162544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.162582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.162828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.162996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.163027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.163228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.163375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.163404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.163627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.163899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.163962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.164166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.164387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.164430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.164637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.164934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.164986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.165212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.165389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.165431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 
00:27:14.534 [2024-07-27 01:35:06.165617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.165866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.165906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.166140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.166499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.166564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.166805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.167020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.167048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.167225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.167520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.167561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-07-27 01:35:06.167765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.167921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-07-27 01:35:06.167947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.168097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.168516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.168571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.168760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.168981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.169022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 
00:27:14.535 [2024-07-27 01:35:06.169233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.169387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.169414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.169593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.169927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.170001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.170220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.170416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.170443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.170633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.170936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.170988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.171194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.171502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.171573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.171864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.172090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.172119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.172356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.172608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.172635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 
00:27:14.535 [2024-07-27 01:35:06.172837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.173039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.173090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.173300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.173675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.173732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.173956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.174142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.174170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.174399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.174619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.174648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.174947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.175173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.175204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.175396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.175589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.175619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.175784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.176011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.176042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 
00:27:14.535 [2024-07-27 01:35:06.176228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.176521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.176574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.176739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.176958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.176990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.177170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.177361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.177391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.177584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.177757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.177784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.177986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.178226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.178254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.178468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.178664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.178695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.178915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.179073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.179102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 
00:27:14.535 [2024-07-27 01:35:06.179255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.179416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.179446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.179669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.179887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.179916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.180189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.180409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.180466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.180675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.180894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.180922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.535 [2024-07-27 01:35:06.181130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.181351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.535 [2024-07-27 01:35:06.181381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.535 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.181583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.181874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.181926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.182150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.182353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.182383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 
00:27:14.536 [2024-07-27 01:35:06.182589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.182905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.182960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.183184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.183348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.183388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.183553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.183768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.183797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.184016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.184189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.184220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.184425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.184599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.184626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.184803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.185071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.185100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.185299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.185560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.185600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 
00:27:14.536 [2024-07-27 01:35:06.185832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.186035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.186071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.186275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.186428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.186456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.186637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.186945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.187002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.187214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.187409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.187437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.187630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.187848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.187880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.188077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.188251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.188283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.188482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.188708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.188739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 
00:27:14.536 [2024-07-27 01:35:06.188931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.189138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.189170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.189401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.189599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.189625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.189771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.189987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.190021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.190213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.190414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.190442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.190654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.190939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.190989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.191202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.191385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.191413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.191592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.191741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.191768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 
00:27:14.536 [2024-07-27 01:35:06.191945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.192117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.192146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.192293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.192441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.192469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.192688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.192848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.192880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.193117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.193292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.193320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.193511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.193719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.536 [2024-07-27 01:35:06.193751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.536 qpair failed and we were unable to recover it. 00:27:14.536 [2024-07-27 01:35:06.193919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.194110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.194142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.194341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.194538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.194566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 
00:27:14.537 [2024-07-27 01:35:06.194763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.194987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.195020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.195236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.195524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.195580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.195773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.195972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.196010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.196263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.196578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.196640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.196841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.197069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.197099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.197330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.197486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.197515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.197744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.197940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.197970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 
00:27:14.537 [2024-07-27 01:35:06.198138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.198338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.198367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.198571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.198804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.198860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.199079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.199242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.199269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.199436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.199584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.199612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.199816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.200015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.200046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.200310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.200595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.200660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.200876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.201031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.201067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 
00:27:14.537 [2024-07-27 01:35:06.201295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.201540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.201566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.201750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.201941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.201972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.202171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.202334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.202373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.202554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.202825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.202881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.203074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.203274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.203307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.203502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.203810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.203869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.204089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.204295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.204327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 
00:27:14.537 [2024-07-27 01:35:06.204533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.204724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.204758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-07-27 01:35:06.204958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.205171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-07-27 01:35:06.205199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.205373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.205575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.205635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.205832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.206003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.206044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.206262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.206441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.206473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.206676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.206840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.206879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.207077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.207282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.207313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 
00:27:14.538 [2024-07-27 01:35:06.207513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.207765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.207815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.208047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.208281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.208312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.208507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.208673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.208704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.208931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.209079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.209107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.209277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.209454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.209491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.209672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.209819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.209846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.210075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.210269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.210300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 
00:27:14.538 [2024-07-27 01:35:06.210499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.210784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.210815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.211012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.211236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.211267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.211494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.211693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.211726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.211943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.212144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.212175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.212406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.212641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.212670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.212852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.213049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.213091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.213260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.213440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.213467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 
00:27:14.538 [2024-07-27 01:35:06.213636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.214009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.214045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.214250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.214447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.214474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.214626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.214822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.214853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.215125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.215329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.215372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.215563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.215755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.215785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.215983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.216170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.216200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.216403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.216559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.216587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 
00:27:14.538 [2024-07-27 01:35:06.216806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.217007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.217036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-07-27 01:35:06.217271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.217490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-07-27 01:35:06.217538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.217694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.217914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.217952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.218163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.218313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.218346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.218562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.218817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.218869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.219098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.219275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.219303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.219499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.219710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.219742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 
00:27:14.539 [2024-07-27 01:35:06.219973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.220174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.220205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.220455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.220652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.220683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.220851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.222372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.222407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.222637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.222800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.222830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.223045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.223234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.223261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.223424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.223651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.223703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.223930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.224130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.224157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 
00:27:14.539 [2024-07-27 01:35:06.224336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.224518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.224547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.224743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.224953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.224986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.225239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.225511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.225562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.225760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.225977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.226007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.226215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.226395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.226423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.226628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.226882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.226930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.227139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.227326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.227370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 
00:27:14.539 [2024-07-27 01:35:06.227565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.227721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.227748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.227917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.228115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.228143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.228352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.228554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.228585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.228742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.228963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.228993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.229192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.229390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.229421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.229640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.229840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.229868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.230046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.230244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.230271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 
00:27:14.539 [2024-07-27 01:35:06.230460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.230633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.230662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.539 [2024-07-27 01:35:06.230832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.230994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.539 [2024-07-27 01:35:06.231023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.539 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.231250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.231440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.231478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.231663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.231829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.231856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.232030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.232219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.232246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.232465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.232634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.232665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.232846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.233012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.233042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 
00:27:14.540 [2024-07-27 01:35:06.233225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.233386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.233414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.233595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.233804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.233851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.234076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.234233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.234259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.234453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.234691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.234732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.234939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.235150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.235179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.235353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.235567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.235597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.235802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.236001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.236028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 
00:27:14.540 [2024-07-27 01:35:06.236227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.236381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.236408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.236586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.236773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.236800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.236975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.237140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.237168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.237345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.237548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.237575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.237775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.237951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.237978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.238128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.238276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.238303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.238511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.238686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.238713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 
00:27:14.540 [2024-07-27 01:35:06.238912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.239090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.239125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.239289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.239470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.239496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.239672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.239845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.239872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.240077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.240282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.240317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.240466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.240637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.240665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.240847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.241014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.241041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.241232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.241375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.241402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 
00:27:14.540 [2024-07-27 01:35:06.241576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.241711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.241738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.241914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.242070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-07-27 01:35:06.242109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-07-27 01:35:06.242285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.242466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.242493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.242644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.242821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.242850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.243027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.243194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.243222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.243382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.243565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.243593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.243767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.243943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.243970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 
00:27:14.541 [2024-07-27 01:35:06.244120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.244313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.244340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.244509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.244704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.244732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.244881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.245055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.245090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.245246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.245427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.245454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.245626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.245777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.245805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.245965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.246156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.246183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.246354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.246537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.246565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 
00:27:14.541 [2024-07-27 01:35:06.246804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.247009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.247037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.247246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.247447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.247474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.247650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.247827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.247855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.248024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.248216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.248243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.248396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.248596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.248624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.248822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.249028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.249054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.249243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.249410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.249437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 
00:27:14.541 [2024-07-27 01:35:06.249611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.249790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.249817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.249988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.250159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.250186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.250357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.250510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.250536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.250733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.250886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.250914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.251118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.251296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.251333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.251486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.251659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.251687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-07-27 01:35:06.251872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.252045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.252078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 
00:27:14.541 [2024-07-27 01:35:06.252241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.252405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-07-27 01:35:06.252434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.252586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.252767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.252795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.252963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.253114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.253140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.253318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.253472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.253500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.253678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.253853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.253880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.254054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.254248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.254276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.254462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.254643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.254672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 
00:27:14.542 [2024-07-27 01:35:06.254847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.255018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.255044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.255197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.255347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.255374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.255509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.255682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.255709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.255878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.256055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.256090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.256305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.256490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.256517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.256693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.256894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.256921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.257087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.257251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.257278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 
00:27:14.542 [2024-07-27 01:35:06.257432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.257605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.257634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.257812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.258004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.258031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.258225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.258402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.258434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.258635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.258808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.258836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.259013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.259199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.259226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.259402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.259582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.259610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.259785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.259959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.259986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 
00:27:14.542 [2024-07-27 01:35:06.260137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.260307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.260344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.260523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.260722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.260749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.542 [2024-07-27 01:35:06.260898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.261073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.542 [2024-07-27 01:35:06.261100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.542 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.261242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.261413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.261440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.261614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.261814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.261841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.262041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.262230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.262261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.262421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.262624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.262651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 
00:27:14.543 [2024-07-27 01:35:06.262827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.263006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.263033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.263195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.263391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.263418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.263566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.263716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.263743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.263944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.264092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.264118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.264262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.264441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.264466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.264668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.264813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.264840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.265018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.265183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.265212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 
00:27:14.543 [2024-07-27 01:35:06.265365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.265539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.265567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.265743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.265945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.265977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.266159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.266341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.266368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.266544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.266685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.266710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.266886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.267041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.267075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.267223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.267376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.267408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.267594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.267796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.267824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 
00:27:14.543 [2024-07-27 01:35:06.268000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.268164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.268192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.268397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.268576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.268602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.543 [2024-07-27 01:35:06.268780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.268955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.543 [2024-07-27 01:35:06.268980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.543 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.269193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.269332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.269359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.269562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.269761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.269791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.269997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.270159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.270186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.270363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.270536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.270564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 
00:27:14.815 [2024-07-27 01:35:06.270744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.270934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.270966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.271131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.271283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.271312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.271514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.271670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.271697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.271889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.272089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.272117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.272293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.272463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.272488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.272631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.272776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.272803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.272955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.273113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.273141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 
00:27:14.815 [2024-07-27 01:35:06.273315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.273510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.273537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.273701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.273850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.273877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.274031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.274190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.274217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.274428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.274615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.274642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.274828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.275007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.275034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.815 [2024-07-27 01:35:06.275222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.275367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.815 [2024-07-27 01:35:06.275394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.815 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.275574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.275713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.275739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 
00:27:14.816 [2024-07-27 01:35:06.275906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.276083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.276111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.276290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.276464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.276491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.276640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.276811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.276837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.277007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.277206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.277233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.277387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.277545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.277571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.277735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.277909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.277937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.278109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.278265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.278293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 
00:27:14.816 [2024-07-27 01:35:06.278472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.278673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.278700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.278870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.279046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.279077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.279230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.279373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.279400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.279543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.279718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.279744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.279895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.280033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.280068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.280245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.280389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.280416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.280596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.280763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.280790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 
00:27:14.816 [2024-07-27 01:35:06.280976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.281155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.281183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.281333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.281502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.281529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.281700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.281845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.281872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.282052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.282234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.282261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.282406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.282549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.282576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.282780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.282955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.282983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.283169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.283317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.283344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 
00:27:14.816 [2024-07-27 01:35:06.283544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.283687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.283714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.283914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.284086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.284113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.284287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.284434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.284461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.284659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.284806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.284838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.285018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.285199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.285226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.285371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.285539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.816 [2024-07-27 01:35:06.285565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.816 qpair failed and we were unable to recover it. 00:27:14.816 [2024-07-27 01:35:06.285738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.285937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.285963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 
00:27:14.817 [2024-07-27 01:35:06.286146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.286294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.286321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.286465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.286666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.286692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.286864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.287018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.287046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.287229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.287370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.287397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.287581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.287750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.287777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.287942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.288114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.288141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.288319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.288488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.288514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 
00:27:14.817 [2024-07-27 01:35:06.288691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.288872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.288899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.289075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.289247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.289274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.289477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.289673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.289700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.289902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.290046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.290080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.290258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.290427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.290453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.290610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.290814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.290840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.291013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.291185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.291212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 
00:27:14.817 [2024-07-27 01:35:06.291370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.291566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.291593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.291759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.291909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.291936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.292119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.292318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.292345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.292501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.292687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.292714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.292868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.293037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.293082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.293253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.293432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.293459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.293604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.293778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.293804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 
00:27:14.817 [2024-07-27 01:35:06.293945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.294121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.294148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.294357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.294504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.294530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.294704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.294840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.294866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.295041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.295214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.295241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.295411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.295581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.295607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.295779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.295945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.295971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.817 qpair failed and we were unable to recover it. 00:27:14.817 [2024-07-27 01:35:06.296173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.817 [2024-07-27 01:35:06.296332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.296359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 
00:27:14.818 [2024-07-27 01:35:06.296534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.296685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.296711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.296884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.297081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.297108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.297261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.297405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.297431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.297630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.297827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.297854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.298034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.298209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.298236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.298377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.298552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.298579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.298739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.298914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.298940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 
00:27:14.818 [2024-07-27 01:35:06.299140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.299317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.299343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.299490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.299656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.299683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.299853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.300027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.300053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.300243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.300380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.300406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.300555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.300752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.300778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.300926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.301097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.301125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.301303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.301472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.301499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 
00:27:14.818 [2024-07-27 01:35:06.301703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.301879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.301906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.302053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.302239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.302266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.302472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.302639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.302665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.302818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.302964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.302990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.303146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.303321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.303347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.303522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.303699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.303729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.303906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.304116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.304141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 
00:27:14.818 [2024-07-27 01:35:06.304310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.304482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.304509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.304674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.304847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.304876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.305046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.305230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.305258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.305416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.305619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.305646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.305812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.305986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.306013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.306190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.306385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.306412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 00:27:14.818 [2024-07-27 01:35:06.306588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.306757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.818 [2024-07-27 01:35:06.306784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.818 qpair failed and we were unable to recover it. 
00:27:14.819 [2024-07-27 01:35:06.306929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.307076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.307104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.307273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.307424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.307452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.307657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.307825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.307852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.307995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.308144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.308172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.308343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.308488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.308515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.308689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.308831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.308859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.309057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.309222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.309249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 
00:27:14.819 [2024-07-27 01:35:06.309427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.309625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.309652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.309852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.309995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.310022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.310208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.310412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.310438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.310611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.310775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.310802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.310999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.311180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.311207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.311363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.311512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.311538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.311677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.311887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.311914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 
00:27:14.819 [2024-07-27 01:35:06.312115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.312290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.312317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.312491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.312693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.312720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.312917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.313065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.313092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.313260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.313433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.313459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.313662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.313837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.313865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.314044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.314218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.314244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.314432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.314607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.314634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 
00:27:14.819 [2024-07-27 01:35:06.314811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.315019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.315045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.315256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.315437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.315464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.315637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.315836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.315862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.316038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.316184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.316211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.316357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.316532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.316559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.819 qpair failed and we were unable to recover it. 00:27:14.819 [2024-07-27 01:35:06.316705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.316844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.819 [2024-07-27 01:35:06.316871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.317018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.317202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.317230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 
00:27:14.820 [2024-07-27 01:35:06.317381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.317555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.317582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.317725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.317909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.317936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.318117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.318292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.318319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.318455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.318628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.318655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.318803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.318987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.319014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.319197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.319371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.319397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.319537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.319714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.319740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 
00:27:14.820 [2024-07-27 01:35:06.319884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.320077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.320104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.320261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.320458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.320485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.320625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.320774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.320800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.321005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.321182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.321209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.321385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.321566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.321593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.321760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.321910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.321937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.322098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.322271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.322298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 
00:27:14.820 [2024-07-27 01:35:06.322494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.322634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.322664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.322863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.323009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.323036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.323193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.323387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.323413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.323563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.323730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.323756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.323923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.324072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.324099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.324276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.324480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.324506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.324687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.324864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.324891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 
00:27:14.820 [2024-07-27 01:35:06.325090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.325267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.325294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.325435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.325584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.325612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.820 qpair failed and we were unable to recover it. 00:27:14.820 [2024-07-27 01:35:06.325789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.325960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.820 [2024-07-27 01:35:06.325986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.326187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.326364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.326391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.326578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.326755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.326782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.326934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.327119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.327145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.327342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.327527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.327554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 
00:27:14.821 [2024-07-27 01:35:06.327693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.327892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.327919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.328096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.328274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.328300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.328446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.328626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.328652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.328826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.328965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.328991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.329164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.329312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.329338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.329518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.329664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.329690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.329832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.330010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.330037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 
00:27:14.821 [2024-07-27 01:35:06.330257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.330454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.330479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.330660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.330857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.330883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.331075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.331281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.331308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.331474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.331648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.331675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.331853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.332054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.332085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.332237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.332395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.332421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.332590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.332735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.332761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 
00:27:14.821 [2024-07-27 01:35:06.332961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.333111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.333138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.333303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.333499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.333525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.333702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.333838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.333864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.334039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.334232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.334258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.334436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.334587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.334615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.334791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.334961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.334987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.335161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.335337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.335363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 
00:27:14.821 [2024-07-27 01:35:06.335558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.335703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.335730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.335874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.336021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.336047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.821 [2024-07-27 01:35:06.336226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.336360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.821 [2024-07-27 01:35:06.336386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.821 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.336530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.336706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.336732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.336930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.337099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.337125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.337270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.337438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.337464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.337602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.337774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.337804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 
00:27:14.822 [2024-07-27 01:35:06.337981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.338173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.338199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.338348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.338516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.338542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.338680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.338852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.338877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.339044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.339191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.339218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.339362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.339533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.339559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.339759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.339921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.339947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.340127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.340277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.340303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 
00:27:14.822 [2024-07-27 01:35:06.340474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.340673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.340699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.340875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.341046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.341088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.341264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.341419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.341445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.341622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.341770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.341796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.341990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.342191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.342218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.342390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.342528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.342554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.342692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.342893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.342919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 
00:27:14.822 [2024-07-27 01:35:06.343071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.343246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.343272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.343422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.343623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.343648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.343847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.344017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.344043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.344219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.344393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.344419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.344568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.344770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.344796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.344983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.345126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.345153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.345333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.345505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.345530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 
00:27:14.822 [2024-07-27 01:35:06.345703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.345867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.345893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.346073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.346221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.346247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.346388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.346561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.822 [2024-07-27 01:35:06.346587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.822 qpair failed and we were unable to recover it. 00:27:14.822 [2024-07-27 01:35:06.346723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.346859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.346884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.347065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.347261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.347287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.347457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.347633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.347659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.347826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.348022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.348048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 
00:27:14.823 [2024-07-27 01:35:06.348211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.348383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.348409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.348608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.348784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.348810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.348996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.349176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.349203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.349376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.349572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.349597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.349770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.349946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.349972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.350146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.350348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.350375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.350575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.350719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.350745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 
00:27:14.823 [2024-07-27 01:35:06.350889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.351090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.351116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.351283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.351495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.351521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.351697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.351852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.351877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.352076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.352252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.352278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.352449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.352642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.352667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.352869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.353045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.353090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.353291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.353437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.353463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 
00:27:14.823 [2024-07-27 01:35:06.353663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.353848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.353874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.354074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.354262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.354288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.354433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.354629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.354654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.354825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.355021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.355047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.355209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.355408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.355433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.355578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.355751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.355777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.355950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.356118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.356144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 
00:27:14.823 [2024-07-27 01:35:06.356319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.356490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.356516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.356692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.356885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.356914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.823 qpair failed and we were unable to recover it. 00:27:14.823 [2024-07-27 01:35:06.357086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.823 [2024-07-27 01:35:06.357232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.357258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.357420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.357592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.357620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.357823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.357997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.358023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.358176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.358343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.358369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.358539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.358736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.358762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 
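Editor's note: the repeated "connect() failed, errno = 111" entries are ECONNREFUSED. The initiator's posix sock layer keeps attempting a TCP connection to 10.0.0.2:4420 while nothing is accepting on that port, consistent with the target application having been taken down (the "Killed" message in the shell trace a few lines below). Purely as an illustration, and not SPDK code, the following minimal C sketch reproduces the same errno by connecting to a port with no listener; the address and port are taken from the log, everything else is a hypothetical stand-in.

    /* Illustrative only -- not SPDK code. Shows that connect() to a TCP
     * port with no listener fails with errno 111 (ECONNREFUSED), which is
     * what posix_sock_create reports above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }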
00:27:14.824 [2024-07-27 01:35:06.358957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.359158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.359185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.359360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.359569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.359594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.359771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.359944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.359969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.360125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 754140 Killed "${NVMF_APP[@]}" "$@" 00:27:14.824 [2024-07-27 01:35:06.360322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.360352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.360522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 01:35:06 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:27:14.824 [2024-07-27 01:35:06.360666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.360692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 01:35:06 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:14.824 01:35:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:14.824 [2024-07-27 01:35:06.360891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 01:35:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:14.824 [2024-07-27 01:35:06.361075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.361102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 
00:27:14.824 01:35:06 -- common/autotest_common.sh@10 -- # set +x 00:27:14.824 [2024-07-27 01:35:06.361249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.361402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.361428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.361583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.361735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.361761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.361938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.362081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.362110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.362276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.362447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.362473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.362647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.362826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.362852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.363020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.363225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.363251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.363449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.363589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.363615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 
00:27:14.824 [2024-07-27 01:35:06.363754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.363927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.363953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 [2024-07-27 01:35:06.364154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.364324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 [2024-07-27 01:35:06.364350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.824 qpair failed and we were unable to recover it. 00:27:14.824 01:35:06 -- nvmf/common.sh@469 -- # nvmfpid=754843 00:27:14.824 [2024-07-27 01:35:06.364501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.824 01:35:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:14.825 01:35:06 -- nvmf/common.sh@470 -- # waitforlisten 754843 00:27:14.825 [2024-07-27 01:35:06.364646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.364673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 01:35:06 -- common/autotest_common.sh@819 -- # '[' -z 754843 ']' 00:27:14.825 [2024-07-27 01:35:06.364870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 01:35:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.825 [2024-07-27 01:35:06.365039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 01:35:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:14.825 [2024-07-27 01:35:06.365072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 01:35:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.825 [2024-07-27 01:35:06.365261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 01:35:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:14.825 01:35:06 -- common/autotest_common.sh@10 -- # set +x 00:27:14.825 [2024-07-27 01:35:06.365431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.365458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 
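Editor's note: the interleaved shell trace above relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace (pid 754843) and then waits for it to start listening on the UNIX domain socket /var/tmp/spdk.sock with a retry budget of 100. The sketch below is only a rough illustration of what such a wait amounts to, retrying a connect on a UNIX-domain socket path until it succeeds; it is not the autotest waitforlisten helper, and only the socket path and retry count are taken from the log.

    /* Illustrative sketch of a "wait for listen" loop: retry connecting to a
     * UNIX-domain socket path until the server accepts or retries run out.
     * The path and retry budget mirror the values in the log; the rest is a
     * hypothetical stand-in, not the actual autotest helper. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int wait_for_listen(const char *path, int max_retries)
    {
        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0) {
                return -1;
            }

            struct sockaddr_un addr = {0};
            addr.sun_family = AF_UNIX;
            strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;        /* the target is up and listening */
            }

            close(fd);
            sleep(1);            /* not ready yet (ENOENT/ECONNREFUSED), retry */
        }
        return -1;               /* gave up after max_retries attempts */
    }

    int main(void)
    {
        if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
            fprintf(stderr, "target never started listening\n");
            return 1;
        }
        printf("target is listening\n");
        return 0;
    }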
00:27:14.825 [2024-07-27 01:35:06.365658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.365859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.365885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.366052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.366210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.366237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.366372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.366573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.366599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.366803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.366970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.366997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.367178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.367314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.367339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.367524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.367721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.367746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.367947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.368149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.368175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 
00:27:14.825 [2024-07-27 01:35:06.368349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.368548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.368574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.368746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.368963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.368990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.369166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.369313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.369340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.369491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.369679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.369706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.369873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.370053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.370086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.370263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.370432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.370458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.370598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.370778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.370804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 
00:27:14.825 [2024-07-27 01:35:06.371010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.371184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.371210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.371402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.371582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.371608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.371794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.371963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.371988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.372162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.372316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.372341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.372487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.372662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.372687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.372893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.373105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.373131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.373323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.373493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.373518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 
00:27:14.825 [2024-07-27 01:35:06.373664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.373807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.373833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.374003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.374189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.374214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.825 qpair failed and we were unable to recover it. 00:27:14.825 [2024-07-27 01:35:06.374376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.825 [2024-07-27 01:35:06.374551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.374577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.374732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.374945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.374971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.375125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.375298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.375332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.375508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.375679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.375705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.375881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.376088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.376115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 
00:27:14.826 [2024-07-27 01:35:06.376297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.376452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.376478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.376623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.376790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.376815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.376991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.377175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.377201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.377340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.377504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.377530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.377701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.377874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.377900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.378049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.378220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.378246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.378397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.378575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.378605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 
00:27:14.826 [2024-07-27 01:35:06.378757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.378955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.378981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.379167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.379337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.379363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.379527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.379676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.379702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.379873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.380037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.380069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.380335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.380499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.380524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.380709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.380895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.380920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.381087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.381229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.381256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 
00:27:14.826 [2024-07-27 01:35:06.381403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.381549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.381575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.381718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.381904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.381930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.382131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.382283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.382309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.382484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.382654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.382680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.382857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.383026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.383052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.383236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.383387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.383412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.383580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.383782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.383807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 
00:27:14.826 [2024-07-27 01:35:06.383950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.384134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.384161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.384315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.384454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.384480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.826 qpair failed and we were unable to recover it. 00:27:14.826 [2024-07-27 01:35:06.384680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.826 [2024-07-27 01:35:06.384826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.384853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.385052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.385211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.385237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.385385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.385533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.385559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.385714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.385914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.385940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.386135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.386387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.386412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 
00:27:14.827 [2024-07-27 01:35:06.386564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.386736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.386762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.386932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.387146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.387173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.387325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.387469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.387494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.387645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.387846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.387872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.388018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.388170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.388196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.388376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.388573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.388599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.388773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.388945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.388971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 
00:27:14.827 [2024-07-27 01:35:06.389145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.389322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.389348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.389492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.389633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.389659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.389804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.389985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.390012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.390160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.390331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.390357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.390507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.390688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.390713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.390887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.391078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.391105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.391280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.391449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.391475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 
00:27:14.827 [2024-07-27 01:35:06.391615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.391755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.391781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.391954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.392120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.392147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.392323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.392517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.392543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.392715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.392873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.392899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.393069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.393219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.393245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.393421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.393563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.393593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.827 [2024-07-27 01:35:06.393792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.393942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.393970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 
00:27:14.827 [2024-07-27 01:35:06.394175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.394382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.827 [2024-07-27 01:35:06.394408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.827 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.394595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.394782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.394809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.394989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.395154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.395181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.395329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.395469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.395494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.395647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.395818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.395844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.396045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.396224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.396251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.396429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.396570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.396596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 
00:27:14.828 [2024-07-27 01:35:06.396796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.396947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.396973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.397144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.397310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.397340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.397530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.397703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.397729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.397936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.398113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.398140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.398284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.398456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.398482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.398647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.398795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.398821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.399016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.399221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.399247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 
00:27:14.828 [2024-07-27 01:35:06.399421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.399565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.399591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.399743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.399913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.399939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.400144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.400309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.400335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.400507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.400677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.400703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.400850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.401024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.401049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.401245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.401423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.401448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.401620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.401798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.401824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 
00:27:14.828 [2024-07-27 01:35:06.402002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.402175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.402203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.402360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.402535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.402562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.402746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.402896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.402921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.403068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.403244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.403271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.403444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.403581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.403607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.403786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.403958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.403983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.404160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.404332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.404358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 
00:27:14.828 [2024-07-27 01:35:06.404499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.404708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.404734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.828 qpair failed and we were unable to recover it. 00:27:14.828 [2024-07-27 01:35:06.404933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.828 [2024-07-27 01:35:06.405130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.405157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.405358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.405525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.405551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.405730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.405877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.405903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.406088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.406265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.406291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.406475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.406966] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:14.829 [2024-07-27 01:35:06.407031] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.829 [2024-07-27 01:35:06.407234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.407263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.407485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.407659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.407685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 
00:27:14.829 [2024-07-27 01:35:06.407859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.408038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.408071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.408225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.408396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.408422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.408598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.408793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.408818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.408999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.409179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.409206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.409378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.409549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.409575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.409743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.409913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.409939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.410093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.410230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.410256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 
00:27:14.829 [2024-07-27 01:35:06.410405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.410569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.410594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.410763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.410960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.410985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.411163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.411345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.411371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.411581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.411732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.411757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.411962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.412117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.412145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.412346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.412527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.412553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.412728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.412897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.412927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 
00:27:14.829 [2024-07-27 01:35:06.413100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.413268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.413294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.413496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.413637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.413663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.413877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.414049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.414084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.414257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.414397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.829 [2024-07-27 01:35:06.414422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.829 qpair failed and we were unable to recover it. 00:27:14.829 [2024-07-27 01:35:06.414591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.414775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.414800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.415008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.415153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.415179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.415366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.415560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.415586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 
00:27:14.830 [2024-07-27 01:35:06.415759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.415928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.415954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.416120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.416282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.416308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.416501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.416785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.416811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.417017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.417217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.417244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.417421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.417571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.417597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.417770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.417915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.417941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.418086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.418229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.418255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 
00:27:14.830 [2024-07-27 01:35:06.418425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.418602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.418627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.418803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.418970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.418996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.419176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.419350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.419376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.419520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.419721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.419746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.419915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.420068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.420094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.420240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.420439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.420465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.420670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.420815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.420841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 
00:27:14.830 [2024-07-27 01:35:06.420993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.421132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.421164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.421327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.421514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.421541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.421742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.421892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.421917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.422092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.422269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.422295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.422472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.422640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.422666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.422852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.423020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.423045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.423196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.423367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.423393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 
00:27:14.830 [2024-07-27 01:35:06.423601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.423780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.423805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.424010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.424187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.424214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.424386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.424564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.424590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-07-27 01:35:06.424787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.424956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-07-27 01:35:06.424982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.425169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.425339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.425371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.425572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.425770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.425796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.425974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.426141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.426168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 
00:27:14.831 [2024-07-27 01:35:06.426349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.426556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.426581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.426779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.426950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.426975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.427154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.427330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.427358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.427562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.427739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.427766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.427938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.428137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.428164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.428305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.428481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.428510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.428691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.428861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.428886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 
00:27:14.831 [2024-07-27 01:35:06.429065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.429242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.429268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.429422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.429598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.429625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.429795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.429996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.430022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.430223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.430395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.430420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.430569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.430755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.430781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.430978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.431153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.431181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.431328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.431489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.431515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 
00:27:14.831 [2024-07-27 01:35:06.431719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.431859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.431884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.432064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.432232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.432257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.432409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.432612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.432637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.432814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.432986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.433011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.433196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.433389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.433415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.433615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.433783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.433808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.433958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.434132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.434159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 
00:27:14.831 [2024-07-27 01:35:06.434330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.434528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.434553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.434703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.434845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.434870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.435045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.435221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.435247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.435381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.435516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-07-27 01:35:06.435541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-07-27 01:35:06.435721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.435867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.435892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.436044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.436223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.436249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.436443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.436618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.436644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 
00:27:14.832 [2024-07-27 01:35:06.436816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.436957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.436983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.437154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.437330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.437361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.437538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.437736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.437761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.437937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.438112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.438138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.438311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.438455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.438480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.438680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.438832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.438858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.439037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.439203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.439230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 
00:27:14.832 [2024-07-27 01:35:06.439402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.439539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.439565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.439768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.439966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.439992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.440179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.440351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.440377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.440556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.440703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.440729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.440914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.441113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.441139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.441312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.441457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.441482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.441653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.441821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.441846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 
00:27:14.832 [2024-07-27 01:35:06.442012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.442190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.442216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.442425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.442593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.442618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.442779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.442947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.442972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.443156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.443327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.443352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.443531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.443733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.443758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.443942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.444118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.444145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.444328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.444478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.444504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 
00:27:14.832 [2024-07-27 01:35:06.444686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.444892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.444918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.445072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.445273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.445300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.445493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.445665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.445691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.445889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.446027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.446065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.832 qpair failed and we were unable to recover it. 00:27:14.832 [2024-07-27 01:35:06.446253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.446460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.832 [2024-07-27 01:35:06.446485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.446673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.833 [2024-07-27 01:35:06.446869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.446894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.447089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.447263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.447289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 
00:27:14.833 [2024-07-27 01:35:06.447470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.447643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.447672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.447871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.448046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.448077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.448251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.448420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.448445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.448615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.448758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.448784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.448929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.449101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.449128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.449302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.449455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.449481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.449640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.449782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.449808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 
00:27:14.833 [2024-07-27 01:35:06.449999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.450175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.450201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.450357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.450559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.450585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.450769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.450943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.450970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.451157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.451342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.451377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.451557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.451742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.451768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.451936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.452141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.452167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.452315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.452495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.452520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 
00:27:14.833 [2024-07-27 01:35:06.452733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.452905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.452930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.453103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.453273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.453299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.453504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.453673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.453699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.453834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.454013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.454038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.454269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.454464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.454493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.454637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.454789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.454815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.454985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.455161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.455188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 
00:27:14.833 [2024-07-27 01:35:06.455376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.455526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.455553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.455731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.455903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.455929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.456081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.456257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.456283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.456472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.456650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.456676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.833 [2024-07-27 01:35:06.456875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.457050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.833 [2024-07-27 01:35:06.457082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.833 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.457256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.457398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.457424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.457597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.457798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.457824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 
00:27:14.834 [2024-07-27 01:35:06.457997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.458232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.458260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.458450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.458599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.458626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.458770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.458953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.458980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.459144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.459321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.459347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.459547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.459723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.459749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.459919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.460072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.460100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.460274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.460473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.460499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 
00:27:14.834 [2024-07-27 01:35:06.460695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.460868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.460896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.461048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.461231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.461258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.461435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.461571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.461597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.461809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.461954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.461980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.462180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.462359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.462385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.462559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.462767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.462793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.462946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.463147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.463174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 
00:27:14.834 [2024-07-27 01:35:06.463344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.463514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.463539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.463712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.463857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.463884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.464025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.464206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.464232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.464404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.464601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.464626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.464779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.464981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.465007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.465188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.465357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.465383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.465557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.465709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.465735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 
00:27:14.834 [2024-07-27 01:35:06.465934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.466097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.466123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-07-27 01:35:06.466275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-07-27 01:35:06.466454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.466481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.466630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.466805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.466831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.467002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.467186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.467212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.467390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.467532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.467558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.467698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.467855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.467880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.468028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.468214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.468240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 
00:27:14.835 [2024-07-27 01:35:06.468394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.468590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.468615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.468764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.468931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.468956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.469135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.469285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.469311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.469494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.469643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.469669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.469836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.470002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.470028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.470092] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9504b0 (9): Bad file descriptor 00:27:14.835 [2024-07-27 01:35:06.470309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.470507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.470536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.470705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.470880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.470906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 
00:27:14.835 [2024-07-27 01:35:06.471054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.471231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.471257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.471427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.471573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.471598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.471769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.471933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.471959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.472138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.472291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.472317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.472504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.472638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.472664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.472819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.472962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.472988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.473161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.473305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.473330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 
00:27:14.835 [2024-07-27 01:35:06.473509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.473680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.473705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.473887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.474024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.474066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.474219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.474387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.474412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.474592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.474741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.474767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.474933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.475076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.475103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.475253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.475421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.475447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.475625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.475793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.475819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 
00:27:14.835 [2024-07-27 01:35:06.476020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.476180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-07-27 01:35:06.476206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-07-27 01:35:06.476358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.476529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.476555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.476705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.476905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.476930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.477132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.477270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.477295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.477450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.477615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.477641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.477807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.477982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.478008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.478176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.478357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.478383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 
00:27:14.836 [2024-07-27 01:35:06.478568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.478748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.478773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.478944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.479127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.479154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.479328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.479497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.479523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.479683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.479819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.479844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.480022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.480212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.480239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.480412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.480558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.480586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.480732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.480902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.480929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 
00:27:14.836 [2024-07-27 01:35:06.481106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.481288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:14.836 [2024-07-27 01:35:06.481293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.481321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.481474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.481648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.481674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.481953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.482276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.482302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.482482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.482657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.482683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.482865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.483020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.483064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.483219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.483413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.483439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.483614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.483760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.483786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 
00:27:14.836 [2024-07-27 01:35:06.483987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.484186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.484212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.484383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.484580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.484606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.484776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.484949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.484975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.485127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.485421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.485446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.485676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.485827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.485852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.486003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.486201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.486228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.486383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.486566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.486592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 
00:27:14.836 [2024-07-27 01:35:06.486739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.486893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.486919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-07-27 01:35:06.487068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-07-27 01:35:06.487254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.487280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.487457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.487632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.487659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.487833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.487975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.488002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.488189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.488382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.488408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.488606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.488756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.488782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.488984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.489143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.489175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 
00:27:14.837 [2024-07-27 01:35:06.489323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.489499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.489524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.489711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.489881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.489920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.490109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.490289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.490315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.490469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.490611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.490637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.490816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.490990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.491016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.491215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.491386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.491412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.491592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.491765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.491791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 
00:27:14.837 [2024-07-27 01:35:06.491990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.492168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.492195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.492439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.492635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.492661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.492807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.493006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.493031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.493245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.493387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.493412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.493596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.493762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.493787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.493964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.494155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.494184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.494370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.494542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.494570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 
00:27:14.837 [2024-07-27 01:35:06.494748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.494925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.494951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.495143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.495317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.495351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.495574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.495776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.495803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.495954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.496136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.496162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.496336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.496487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.496513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.496712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.496910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.496937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.497119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.497305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.497331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 
00:27:14.837 [2024-07-27 01:35:06.497529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.497714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.497741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.837 qpair failed and we were unable to recover it. 00:27:14.837 [2024-07-27 01:35:06.497920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.837 [2024-07-27 01:35:06.498085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.498112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.498263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.498413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.498441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.498597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.498746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.498774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.498917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.499096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.499123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.499300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.499447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.499474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.499673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.499848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.499874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 
00:27:14.838 [2024-07-27 01:35:06.500038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.500206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.500233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.500404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.500597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.500624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.500796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.500949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.500976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.501167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.501356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.501384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.501586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.501727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.501754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.501905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.502080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.502107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.502303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.502468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.502493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 
00:27:14.838 [2024-07-27 01:35:06.502663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.502833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.502859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.503035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.503206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.503233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.503405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.503552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.503578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.503751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.503919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.503946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.504083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.504224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.504251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.504399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.504589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.504615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.504762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.504929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.504956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 
00:27:14.838 [2024-07-27 01:35:06.505152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.505328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.505360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.505530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.505696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.505722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.505899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.506045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.506081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.506272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.506417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.506443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.506592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.506757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.506784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.506978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.507166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.507193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.838 qpair failed and we were unable to recover it. 00:27:14.838 [2024-07-27 01:35:06.507337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.507510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.838 [2024-07-27 01:35:06.507536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 
00:27:14.839 [2024-07-27 01:35:06.507702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.507881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.507907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.508116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.508288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.508319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.508478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.508651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.508677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.508843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.509017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.509054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.509230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.509400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.509427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.509601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.509804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.509830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.510006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.510150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.510177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 
00:27:14.839 [2024-07-27 01:35:06.510351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.510538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.510564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.510737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.510917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.510943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.511136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.511283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.511309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.511490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.511670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.511695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.511870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.512055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.512085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.512290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.512473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.512499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.512637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.512811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.512836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 
00:27:14.839 [2024-07-27 01:35:06.513013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.513182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.513209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.513358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.513558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.513584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.513782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.513969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.513994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.514172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.514373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.514398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.514536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.514675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.514701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.514878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.515082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.515109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.515247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.515419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.515446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 
00:27:14.839 [2024-07-27 01:35:06.515654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.515827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.515853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.516029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.516221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.516247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.516425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.516590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.516615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.516790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.516937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.516962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.517131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.517306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.839 [2024-07-27 01:35:06.517332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.839 qpair failed and we were unable to recover it. 00:27:14.839 [2024-07-27 01:35:06.517487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.517658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.517684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.517858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.518026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.518069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 
00:27:14.840 [2024-07-27 01:35:06.518255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.518431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.518458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.518595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.518765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.518791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.518993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.519166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.519193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.519339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.519488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.519514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.519690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.519845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.519871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.520045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.520257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.520284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.520433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.520618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.520644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 
00:27:14.840 [2024-07-27 01:35:06.520819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.520995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.521021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.521234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.521457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.521487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.521694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.521874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.521901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.522105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.522289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.522317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.522501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.522672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.522697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.522877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.523026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.523053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.523230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.523403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.523428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 
00:27:14.840 [2024-07-27 01:35:06.523627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.523774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.523800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.523975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.524152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.524181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.524382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.524525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.524552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.524696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.524868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.524895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.525068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.525215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.525243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.525417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.525619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.525645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.525846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.525991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.526018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 
00:27:14.840 [2024-07-27 01:35:06.526208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.526388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.526415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.526589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.526765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.526791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.526941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.527150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.527177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.527348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.527493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.527524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.527731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.527904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.527931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-07-27 01:35:06.528106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-07-27 01:35:06.528289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.528316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.528492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.528672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.528699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 
00:27:14.841 [2024-07-27 01:35:06.528902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.529102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.529130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.529288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.529477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.529504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.529677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.529851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.529878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.530023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.530198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.530226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.530374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.530552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.530581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.530781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.530938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.530964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.531143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.531311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.531342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 
00:27:14.841 [2024-07-27 01:35:06.531516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.531691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.531719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.531894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.532067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.532095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.532298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.532474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.532500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.532646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.532797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.532826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.532997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.533172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.533201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.533383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.533590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.533616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.533788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.533963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.534006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 
00:27:14.841 [2024-07-27 01:35:06.534169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.534338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.534365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.534537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.534713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.534741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.534926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.535111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.535143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.535318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.535491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.535519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.535696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.535899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.535926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.536098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.536254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.536281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.536428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.536599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.536626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 
00:27:14.841 [2024-07-27 01:35:06.536781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.536957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.536984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.537135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.537307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.537335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.537490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.537663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.537690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.537847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.538047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.538080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.538252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.538534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.538560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-07-27 01:35:06.538705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-07-27 01:35:06.538879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.538910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.539086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.539275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.539303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 
00:27:14.842 [2024-07-27 01:35:06.539474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.539657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.539686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.539940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.540180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.540207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.540387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.540564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.540591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.540788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.540987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.541013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.541193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.541342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.541369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.541522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.541693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.541719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.541958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.542169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.542196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 
00:27:14.842 [2024-07-27 01:35:06.542372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.542628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.542669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.542902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.543055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.543101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.543355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.543532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.543560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.543786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.543957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.543982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.544128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.544332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.544358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.544634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.544812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.544838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.545011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.545212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.545239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 
00:27:14.842 [2024-07-27 01:35:06.545398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.545555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.545583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.545755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.545931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.545959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.546136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.546309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.546336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.546536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.546701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.546727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.546876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.547049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.547081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.547230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.547409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.547435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.547612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.547824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.547850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 
00:27:14.842 [2024-07-27 01:35:06.548025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.548211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.548238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.548415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.548592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.548617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.548817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.548988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.549015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.549173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.549371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.549398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.549559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.549743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.549771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-07-27 01:35:06.549950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.550104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-07-27 01:35:06.550132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.550309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.550478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.550503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 
00:27:14.843 [2024-07-27 01:35:06.550686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.550830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.550857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.551068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.551216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.551242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.551424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.551575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.551601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.551744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.551921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.551962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.552181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.552330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.552356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.552561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.552761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.552788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.552959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.553112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.553139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 
00:27:14.843 [2024-07-27 01:35:06.553288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.553458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.553484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.553660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.553831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.553857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.554032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.554209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.554235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.554443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.554629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.554656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.554860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.555039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.555073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.555247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.555424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.555451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.555656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.555829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.555855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 
00:27:14.843 [2024-07-27 01:35:06.556016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.556221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.556251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.556451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.556619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.556646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.556810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.557027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.557053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.557246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.557447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.557485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.557625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.557822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.557849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.557993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.558171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.558210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 00:27:14.843 [2024-07-27 01:35:06.558355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.558522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.843 [2024-07-27 01:35:06.558549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:14.843 qpair failed and we were unable to recover it. 
00:27:15.116 [2024-07-27 01:35:06.558728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.558940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.558967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.559144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.559316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.559342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.559577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.559728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.559754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.559908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.560116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.560143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.560285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.560480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.560506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.560679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.560847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.560873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.561018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.561205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.561232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 
00:27:15.116 [2024-07-27 01:35:06.561408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.561550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.561576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.561749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.561949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.561975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.562179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.562382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.562409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.562563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.562701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.562728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.562873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.563047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.563079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.563222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.563373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.563401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.563575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.563758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.563785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 
00:27:15.116 [2024-07-27 01:35:06.563964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.564135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.564162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.564330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.564502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.564529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.564681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.564856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.564883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.565088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.565281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.565312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.565497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.565668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.565695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.565872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.566067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.566094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.566287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.566472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.566502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 
00:27:15.116 [2024-07-27 01:35:06.566706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.566882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.116 [2024-07-27 01:35:06.566908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.116 qpair failed and we were unable to recover it. 00:27:15.116 [2024-07-27 01:35:06.567087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.567284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.567310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.567487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.567659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.567685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.567854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.568031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.568063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.568264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.568460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.568487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.568664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.568835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.568861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.569064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.569235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.569262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 
00:27:15.117 [2024-07-27 01:35:06.569467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.569635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.569662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.569849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.570022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.570049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.570237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.570398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.570425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.570598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.570799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.570827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.571000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.571202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.571230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.571382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.571559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.571586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.571765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.571967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.571994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 
00:27:15.117 [2024-07-27 01:35:06.572172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.572373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.572399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.572569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.572724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.572752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.572930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.573104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.573130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.573342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.573514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.573541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.573686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.573860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.573888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.574068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.574229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.574256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.574433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.574612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.574639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 
00:27:15.117 [2024-07-27 01:35:06.574788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.574963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.574990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.575169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.575345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.575372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.575542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.575728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.575754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.575955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.576108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.576136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.576307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.576481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.576508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.576681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.576855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.576883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 00:27:15.117 [2024-07-27 01:35:06.577084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.577259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.577286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.117 qpair failed and we were unable to recover it. 
00:27:15.117 [2024-07-27 01:35:06.577463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.117 [2024-07-27 01:35:06.577632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.577659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.577840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.578055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.578087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.578264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.578435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.578462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.578660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.578810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.578836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.579007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.579156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.579183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.579361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.579539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.579566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.579745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.579894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.579921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 
00:27:15.118 [2024-07-27 01:35:06.580096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.580263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.580290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.580464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.580633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.580659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.580829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.580967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.580992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.581165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.581345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.581370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.581545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.581715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.581745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.581923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.582073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.582102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.582276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.582482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.582509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 
00:27:15.118 [2024-07-27 01:35:06.582676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.582845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.582871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.583043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.583254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.583281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.583432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.583606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.583632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.583806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.583979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.584007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.584168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.584339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.584366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.584553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.584720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.584746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.584913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.585116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.585143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 
00:27:15.118 [2024-07-27 01:35:06.585315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.585514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.585540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.585715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.585889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.585915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.586100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.586250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.586277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.586457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.586651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.586677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.586854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.587027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.587053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.587234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.587388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.587416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.118 qpair failed and we were unable to recover it. 00:27:15.118 [2024-07-27 01:35:06.587615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.118 [2024-07-27 01:35:06.587787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.587812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 
00:27:15.119 [2024-07-27 01:35:06.587982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.588123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.588150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.588344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.588515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.588541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.588747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.588942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.588968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.589136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.589281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.589308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.589492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.589660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.589686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.589859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.590052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.590083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.590251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.590387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.590413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 
00:27:15.119 [2024-07-27 01:35:06.590568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.590739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.590765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.590961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.591132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.591159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.591304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.591499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.591524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.591698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.591847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.591873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.592044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.592227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.592253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.592393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.592562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.592588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.592760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.592907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.592933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 
00:27:15.119 [2024-07-27 01:35:06.593132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.593283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.593310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.593482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.593679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.593705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.593880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.594062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.594089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.594257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.594434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.594460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.594630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.594811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.594837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.595012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.595194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.595221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.595428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.595591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.595617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 
00:27:15.119 [2024-07-27 01:35:06.595798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.595969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.595994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.596142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.596291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.596316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.119 qpair failed and we were unable to recover it. 00:27:15.119 [2024-07-27 01:35:06.596458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.596632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.119 [2024-07-27 01:35:06.596657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.596826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.597027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.597053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.597207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.597352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.597377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.597529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.597749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.597774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.597914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.598069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.598095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.598160] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:15.120 [2024-07-27 01:35:06.598245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.598289] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:15.120 [2024-07-27 01:35:06.598325] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.120 [2024-07-27 01:35:06.598341] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:15.120 [2024-07-27 01:35:06.598393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.598417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.598393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:27:15.120 [2024-07-27 01:35:06.598447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:27:15.120 [2024-07-27 01:35:06.598450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:15.120 [2024-07-27 01:35:06.598422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:27:15.120 [2024-07-27 01:35:06.598588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.598755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.598781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.598945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.599109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.599136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.599280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.599426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.599452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.599620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.599763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.599793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.599970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.600147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.600173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 
00:27:15.120 [2024-07-27 01:35:06.600329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.600506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.600532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.600685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.600937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.600963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.601130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.601278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.601304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.601454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.601618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.601644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.601787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.601949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.601975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.602151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.602324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.602349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.602525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.602698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.602724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 
00:27:15.120 [2024-07-27 01:35:06.602925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.603072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.603097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.603274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.603480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.603512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.603659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.603833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.603870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.604023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.604224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.604251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.120 qpair failed and we were unable to recover it. 00:27:15.120 [2024-07-27 01:35:06.604402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.604565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.120 [2024-07-27 01:35:06.604592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.604867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.605096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.605126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.605303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.605465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.605491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 
00:27:15.121 [2024-07-27 01:35:06.605628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.605776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.605803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.605981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.606151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.606179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.606359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.606552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.606579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.606745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.607010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.607037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.607205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.607385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.607410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.607564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.607740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.607766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.607951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.608138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.608164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 
00:27:15.121 [2024-07-27 01:35:06.608342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.608519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.608546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.608696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.608864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.608891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.609037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.609209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.609235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.609417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.609573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.609608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.609777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.609986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.610013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.610165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.610321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.610347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.610490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.610666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.610692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 
00:27:15.121 [2024-07-27 01:35:06.610884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.611048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.611114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.611262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.611407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.611434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.611609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.611787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.121 [2024-07-27 01:35:06.611814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.121 qpair failed and we were unable to recover it. 00:27:15.121 [2024-07-27 01:35:06.611981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.612160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.612188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.612336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.612511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.612538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.612687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.612836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.612862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.613006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.613272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.613298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 
00:27:15.122 [2024-07-27 01:35:06.613484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.613630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.613656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.613840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.613998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.614025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.614208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.614370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.614397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.614572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.614751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.614777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.614961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.615133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.615159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.615420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.615576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.615602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.615783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.615954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.615980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 
00:27:15.122 [2024-07-27 01:35:06.616172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.616314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.616340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.616537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.616679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.616706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.616881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.617111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.617138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.617387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.617553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.617580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.617739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.617916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.617943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.618146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.618409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.618436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.618642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.618837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.618863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 
00:27:15.122 [2024-07-27 01:35:06.619005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.619159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.619189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.619360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.619507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.619533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.619706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.619849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.619885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.620048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.620249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.620275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.620416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.620581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.620608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.620776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.620964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.620991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.122 qpair failed and we were unable to recover it. 00:27:15.122 [2024-07-27 01:35:06.621145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.621366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.122 [2024-07-27 01:35:06.621392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 
00:27:15.123 [2024-07-27 01:35:06.621565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.621840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.621867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.622075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.622232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.622267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.622420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.622595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.622621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.622776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.622957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.622984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.623161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.623311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.623339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.623529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.623675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.623702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.623856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.624001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.624028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 
00:27:15.123 [2024-07-27 01:35:06.624213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.624356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.624391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.624545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.624694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.624723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.624881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.625031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.625066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.625241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.625387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.625413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.625614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.625768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.625796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.625942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.626084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.626123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.626381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.626556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.626582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 
00:27:15.123 [2024-07-27 01:35:06.626755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.626922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.626949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.627121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.627290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.627315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.627501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.627670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.627696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.627878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.628050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.628082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.628258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.628401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.628428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.628599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.628765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.628791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.629069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.629225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.629250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 
00:27:15.123 [2024-07-27 01:35:06.629428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.629601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.629628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.629829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.629965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.629992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.630139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.630319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.630346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.123 qpair failed and we were unable to recover it. 00:27:15.123 [2024-07-27 01:35:06.630525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.123 [2024-07-27 01:35:06.630718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.630754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.630930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.631107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.631135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.631326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.631531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.631557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.631707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.631857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.631884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 
00:27:15.124 [2024-07-27 01:35:06.632069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.632210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.632236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.632408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.632585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.632614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.632762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.632913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.632939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.633110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.633256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.633283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.633442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.633608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.633635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.633778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.633941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.633967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.634200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.634363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.634391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 
00:27:15.124 [2024-07-27 01:35:06.634535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.634703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.634730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.634901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.635088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.635123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.635288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.635443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.635470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.635641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.635777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.635804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.636007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.636154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.636184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.636349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.636513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.636540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.636701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.636872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.636898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 
00:27:15.124 [2024-07-27 01:35:06.637076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.637217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.637244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.637402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.637589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.637615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.637770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.637938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.637968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.638147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.638300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.638328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.638473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.638737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.638763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.638916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.639069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.639096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.124 [2024-07-27 01:35:06.639388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.639584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.639610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 
00:27:15.124 [2024-07-27 01:35:06.639807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.639956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.124 [2024-07-27 01:35:06.639983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.124 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.640243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.640397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.640423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.640583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.640855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.640882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.641050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.641207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.641234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.641386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.641562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.641589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.641734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.641909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.641936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.642089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.642239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.642265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 
00:27:15.125 [2024-07-27 01:35:06.642432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.642634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.642661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.642805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.642986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.643012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.643184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.643338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.643365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.643501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.643640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.643666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.643835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.644001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.644028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.644202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.644346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.644373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.644515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.644662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.644690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 
00:27:15.125 [2024-07-27 01:35:06.644860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.645066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.645093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.645267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.645445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.645471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.645619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.645784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.645811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.645950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.646098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.646125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.646266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.646412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.646438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.646589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.646739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.646771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.646974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.647143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.647170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 
00:27:15.125 [2024-07-27 01:35:06.647353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.647544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.647571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.647737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.647893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.647919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.648090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.648294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.648321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.648618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.648793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.648819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.649023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.649183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.649210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.649357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.649528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.125 [2024-07-27 01:35:06.649555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.125 qpair failed and we were unable to recover it. 00:27:15.125 [2024-07-27 01:35:06.649719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.649865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.649900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 
00:27:15.126 [2024-07-27 01:35:06.650065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.650211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.650238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.650388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.650586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.650612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.650793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.650965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.650992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.651181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.651355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.651381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.651555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.651727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.651754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.651940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.652115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.652142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.652315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.652465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.652492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 
00:27:15.126 [2024-07-27 01:35:06.652632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.652782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.652809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.652988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.653136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.653167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.653317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.653494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.653523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.653679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.653884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.653911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.654076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.654218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.654244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.654407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.654553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.654579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.654768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.654936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.654962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 
00:27:15.126 [2024-07-27 01:35:06.655106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.655271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.655297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.655457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.655621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.655648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.655827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.655971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.656000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.656151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.656327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.656354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.656530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.656672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.656710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.656859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.657031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.657063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.657239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.657410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.657436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 
00:27:15.126 [2024-07-27 01:35:06.657578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.657748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.657775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.126 qpair failed and we were unable to recover it. 00:27:15.126 [2024-07-27 01:35:06.657934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.126 [2024-07-27 01:35:06.658086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.658114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.658260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.658429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.658455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.658605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.658743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.658769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.658902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.659071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.659098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.659251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.659442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.659469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.659644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.659796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.659824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 
00:27:15.127 [2024-07-27 01:35:06.659965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.660162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.660190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.660352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.660497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.660524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.660669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.660856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.660882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.661111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.661287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.661314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.661463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.661743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.661769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.661914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.662079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.662106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.662254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.662421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.662448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 
00:27:15.127 [2024-07-27 01:35:06.662597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.662857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.662883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.663040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.663197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.663224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.663388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.663606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.663632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.663813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.663957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.663983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.664131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.664277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.664305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.664479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.664653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.664679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.664856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.664990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.665016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 
00:27:15.127 [2024-07-27 01:35:06.665168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.665438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.665465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.665653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.665855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.665881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.666065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.666221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.666247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.666401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.666551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.666578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.666748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.666886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.666913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.127 qpair failed and we were unable to recover it. 00:27:15.127 [2024-07-27 01:35:06.667082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.127 [2024-07-27 01:35:06.667251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.667277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.667420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.667593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.667620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 
00:27:15.128 [2024-07-27 01:35:06.667789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.667942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.667970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.668175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.668319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.668346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.668493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.668667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.668694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.668847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.669017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.669044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.669270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.669443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.669470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.669614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.669875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.669902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.670083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.670230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.670257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 
00:27:15.128 [2024-07-27 01:35:06.670427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.670576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.670603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.670746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.670924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.670950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.671127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.671269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.671296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.671439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.671641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.671672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.671831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.672001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.672026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.672177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.672321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.672348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.672520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.672667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.672694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 
00:27:15.128 [2024-07-27 01:35:06.672896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.673085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.673113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.673262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.673436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.673463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.673619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.673777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.673804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.673966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.674131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.674158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.674305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.674475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.674502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.674644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.674815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.674842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.675044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.675190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.675224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 
00:27:15.128 [2024-07-27 01:35:06.675386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.675663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.675690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.675848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.675998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.676024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.128 qpair failed and we were unable to recover it. 00:27:15.128 [2024-07-27 01:35:06.676202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.676370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.128 [2024-07-27 01:35:06.676402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.676584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.676753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.676779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.676948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.677119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.677146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.677289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.677448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.677474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.677616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.677787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.677813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 
00:27:15.129 [2024-07-27 01:35:06.677989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.678131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.678159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.678363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.678534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.678561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.678745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.678922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.678948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.679100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.679250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.679278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.679454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.679605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.679631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.679805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.679965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.679991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.680133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.680294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.680321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 
00:27:15.129 [2024-07-27 01:35:06.680510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.680659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.680686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.680874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.681025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.681053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.681208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.681366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.681392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.681548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.681735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.681761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.681929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.682103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.682139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.682288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.682423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.682450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.682630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.682780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.682806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 
00:27:15.129 [2024-07-27 01:35:06.683094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.683256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.683283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.683451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.683613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.683639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.683810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.683977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.684004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.684176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.684346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.684373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.684550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.684696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.684722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.684868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.685028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.129 [2024-07-27 01:35:06.685055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.129 qpair failed and we were unable to recover it. 00:27:15.129 [2024-07-27 01:35:06.685274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.685439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.685466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 
00:27:15.130 [2024-07-27 01:35:06.685637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.685792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.685818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.685995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.686148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.686175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.686354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.686535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.686562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.686728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.686902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.686929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.687088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.687265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.687292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.687450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.687624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.687651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.687822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.687975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.688013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 
00:27:15.130 [2024-07-27 01:35:06.688198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.688365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.688392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.688598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.688740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.688767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.688925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.689096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.689124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.689299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.689438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.689465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.689654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.689825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.689851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.689996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.690161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.690198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.690376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.690526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.690553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 
00:27:15.130 [2024-07-27 01:35:06.690731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.690933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.690960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.691138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.691314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.691341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.691513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.691697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.691723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.691899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.692069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.692096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.692244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.692390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.692417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.692553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.692722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.692749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.692893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.693036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.693090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 
00:27:15.130 [2024-07-27 01:35:06.693268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.693427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.693453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.693598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.693766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.693793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.693947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.694098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.694126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.694298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.694452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.694478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.130 qpair failed and we were unable to recover it. 00:27:15.130 [2024-07-27 01:35:06.694634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.694793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.130 [2024-07-27 01:35:06.694819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.694976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.695176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.695203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.695359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.695502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.695529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 
00:27:15.131 [2024-07-27 01:35:06.695697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.695871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.695898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.696050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.696236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.696263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.696434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.696594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.696621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.696845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.696988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.697018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.697195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.697370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.697397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.697553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.697707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.697736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.697911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.698056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.698089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 
00:27:15.131 [2024-07-27 01:35:06.698290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.698462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.698489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.698667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.698815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.698842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.698989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.699215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.699242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.699447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.699591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.699618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.699763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.699905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.699932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.700131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.700305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.700331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.700504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.700701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.700727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 
00:27:15.131 [2024-07-27 01:35:06.700896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.701040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.701088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.701248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.701438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.701465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.701612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.701780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.701807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.701976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.702150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.702177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.702341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.702513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.702540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.702694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.702839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.702865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 00:27:15.131 [2024-07-27 01:35:06.703011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.703237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.703264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.131 qpair failed and we were unable to recover it. 
00:27:15.131 [2024-07-27 01:35:06.703413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.703555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.131 [2024-07-27 01:35:06.703581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.703758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.703926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.703953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.704093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.704234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.704261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.704394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.704549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.704576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.704767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.704978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.705009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.705168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.705329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.705360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.705554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.705740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.705767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 
00:27:15.132 [2024-07-27 01:35:06.705926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.706128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.706155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.706323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.706488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.706515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.706698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.706833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.706860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.707030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.707216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.707243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.707422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.707574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.707600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.707808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.707956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.707983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.708152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.708327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.708353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 
00:27:15.132 [2024-07-27 01:35:06.708550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.708754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.708786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.708935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.709111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.709139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.709291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.709432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.709458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.709630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.709808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.709835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.710007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.710159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.710185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.710328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.710499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.710525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.710699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.710869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.710895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 
00:27:15.132 [2024-07-27 01:35:06.711070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.711212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.711238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.711387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.711536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.711563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.711743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.711931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.711957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.712129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.712268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.712294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.712463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.712634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.712660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.712862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.713012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.713039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.713206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.713343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.713371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 
00:27:15.132 [2024-07-27 01:35:06.713578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.713720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.132 [2024-07-27 01:35:06.713747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.132 qpair failed and we were unable to recover it. 00:27:15.132 [2024-07-27 01:35:06.713923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.714070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.714097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.714270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.714419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.714446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.714622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.714761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.714788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.714924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.715064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.715091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.715232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.715369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.715396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.715576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.715747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.715774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 
00:27:15.133 [2024-07-27 01:35:06.715950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.716093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.716120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.716292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.716459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.716485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.716661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.716794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.716821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.716995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.717161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.717189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.717338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.717478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.717504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.717650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.717786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.717813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.717971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.718121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.718148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 
00:27:15.133 [2024-07-27 01:35:06.718297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.718465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.718491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.718634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.718817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.718844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.719023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.719169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.719197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.719338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.719483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.719510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.719655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.719823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.719850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.720017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.720188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.720215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.720362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.720533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.720560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 
00:27:15.133 [2024-07-27 01:35:06.720747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.720916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.720943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.721132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.721303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.721330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.721527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.721664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.721691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.721836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.722031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.722062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.722209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.722370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.722397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.722530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.722697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.722724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.722912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.723078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.723110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 
00:27:15.133 [2024-07-27 01:35:06.723299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.723484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.723511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.133 qpair failed and we were unable to recover it. 00:27:15.133 [2024-07-27 01:35:06.723685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.723825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.133 [2024-07-27 01:35:06.723851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.724002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.724182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.724209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.724375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.724535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.724561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.724704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.724840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.724867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.725071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.725239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.725266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.725404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.725569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.725596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 
00:27:15.134 [2024-07-27 01:35:06.725762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.725898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.725924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.726102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.726288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.726315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.726480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.726620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.726646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.726824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.727001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.727028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.727198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.727350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.727377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.727520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.727652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.727678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.727843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.728032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.728063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 
00:27:15.134 [2024-07-27 01:35:06.728231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.728403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.728437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.728586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.728746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.728773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.728959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.729136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.729164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.729310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.729456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.729484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.729630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.729809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.729835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.730008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.730183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.730210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.730378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.730547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.730573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 
00:27:15.134 [2024-07-27 01:35:06.730718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.730856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.730882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.731054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.731195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.731222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.731361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.731525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.731551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.731723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.731864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.731891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.732031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.732194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.732221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.732395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.732543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.732569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.732706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.732844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.732871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 
00:27:15.134 [2024-07-27 01:35:06.733024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.733172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.733199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.733348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.733488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.134 [2024-07-27 01:35:06.733515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.134 qpair failed and we were unable to recover it. 00:27:15.134 [2024-07-27 01:35:06.733719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.733885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.733912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.734051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.734224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.734251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.734423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.734565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.734592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.734765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.734951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.734977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.735147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.735290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.735317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 
00:27:15.135 [2024-07-27 01:35:06.735474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.735639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.735666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.735833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.735992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.736018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.736167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.736301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.736327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.736519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.736676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.736702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.736872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.737040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.737072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.737210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.737353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.737380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.737523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.737719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.737746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 
00:27:15.135 [2024-07-27 01:35:06.737915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.738084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.738111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.738262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.738439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.738465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.738612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.738750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.738777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.738919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.739057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.739089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.739267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.739414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.739440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.739619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.739758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.739784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.739929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.740100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.740128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 
00:27:15.135 [2024-07-27 01:35:06.740272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.740442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.740468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.740642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.740811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.740842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.740978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.741149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.741176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.741329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.741493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.741520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.741727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.741874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.741902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.742082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.742236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.742263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.742417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.742551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.742578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 
00:27:15.135 [2024-07-27 01:35:06.742746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.742919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.742947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.743115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.743285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.743312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.135 qpair failed and we were unable to recover it. 00:27:15.135 [2024-07-27 01:35:06.743497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.135 [2024-07-27 01:35:06.743666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.743693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.743835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.744005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.744032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.744217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.744407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.744434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.744580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.744722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.744748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.744919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.745088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.745116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 
00:27:15.136 [2024-07-27 01:35:06.745253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.745391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.745418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.745598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.745732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.745759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.745924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.746065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.746092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.746271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.746426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.746453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.746627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.746765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.746792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.746983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.747150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.747205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.747381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.747550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.747576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 
00:27:15.136 [2024-07-27 01:35:06.747748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.747908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.747934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.748088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.748245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.748272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.748448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.748586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.748612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.748780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.748953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.748980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.749152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.749312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.749339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.749545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.749685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.749712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.749848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.749997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.750023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 
00:27:15.136 [2024-07-27 01:35:06.750187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.750356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.750383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.750548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.750695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.750723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.750894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.751085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.751112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.751289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.751456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.751483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.751644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.751843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.751870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.752027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.752177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.136 [2024-07-27 01:35:06.752203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.136 qpair failed and we were unable to recover it. 00:27:15.136 [2024-07-27 01:35:06.752386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.752534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.752561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 
00:27:15.137 [2024-07-27 01:35:06.752730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.752907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.752933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.753084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.753221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.753248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.753395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.753554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.753581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.753720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.753914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.753941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.754132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.754311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.754339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.754514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.754667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.754693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.754833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.755019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.755045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 
00:27:15.137 [2024-07-27 01:35:06.755209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.755348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.755379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.755555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.755727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.755753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.755891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.756037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.756068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.756221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.756422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.756449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.756589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.756754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.756780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.756923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.757111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.757139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.757285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.757427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.757453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 
00:27:15.137 [2024-07-27 01:35:06.757625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.757765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.757792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.757993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.758138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.758165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.758332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.758470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.758497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.758637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.758794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.758825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.758973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.759138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.759165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.759335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.759484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.759510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.759655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.759836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.759863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 
00:27:15.137 [2024-07-27 01:35:06.760063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.760212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.760238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.760412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.760577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.760603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.137 qpair failed and we were unable to recover it. 00:27:15.137 [2024-07-27 01:35:06.760778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.137 [2024-07-27 01:35:06.760961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.760988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.761190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.761360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.761387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.761522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.761665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.761691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.761866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.762008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.762035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.762230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.762395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.762421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 
00:27:15.138 [2024-07-27 01:35:06.762601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.762751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.762778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.762919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.763094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.763121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.763260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.763422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.763449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.763588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.763730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.763756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.763957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.764115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.764141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.764316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.764504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.764530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.764703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.764842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.764869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 
00:27:15.138 [2024-07-27 01:35:06.765097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.765249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.765276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.765421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.765565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.765593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.765734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.765899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.765926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.766097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.766289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.766322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.766480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.766661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.766687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.766859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.766997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.767022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.767178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.767365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.767391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 
00:27:15.138 [2024-07-27 01:35:06.767565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.767720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.767747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.767945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.768085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.768112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.768288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.768437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.768465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.768637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.768806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.768832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.768999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.769148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.769175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.769332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.769498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.769525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.769702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.769864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.769896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 
00:27:15.138 [2024-07-27 01:35:06.770045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.770237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.770265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.770439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.770614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.138 [2024-07-27 01:35:06.770642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.138 qpair failed and we were unable to recover it. 00:27:15.138 [2024-07-27 01:35:06.770785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.770997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.771025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.771203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.771354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.771381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.771528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.771699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.771725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.771867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.772079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.772107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.772255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.772400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.772426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 
00:27:15.139 [2024-07-27 01:35:06.772602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.772786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.772813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.772964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.773116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.773145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.773288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.773500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.773534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.773681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.773866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.773892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.774053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.774214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.774241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.774381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.774530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.774564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.774748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.774921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.774950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 
00:27:15.139 [2024-07-27 01:35:06.775112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.775261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.775289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.775475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.775654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.775682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.775834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.776009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.776036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.776193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.776380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.776407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.776548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.776730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.776757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.776903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.777089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.777118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.777262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.777440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.777468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 
00:27:15.139 [2024-07-27 01:35:06.777612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.777758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.777784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.777984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.778168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.778203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.778384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.778529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.778564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.778717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.778874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.778901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.779074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.779217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.779243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.779411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.779549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.779576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.779775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.779953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.779981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 
00:27:15.139 [2024-07-27 01:35:06.780137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.780312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.780341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.780552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.780730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.139 [2024-07-27 01:35:06.780757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.139 qpair failed and we were unable to recover it. 00:27:15.139 [2024-07-27 01:35:06.780903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.781044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.781080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.781242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.781412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.781438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.781596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.781746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.781773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.781961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.782130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.782159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.782301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.782511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.782537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 
00:27:15.140 [2024-07-27 01:35:06.782714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.782857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.782883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.783090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.783250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.783284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.783454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.783596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.783623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.783816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.783986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.784014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.784202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.784364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.784391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.784567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.784768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.784805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.784948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.785101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.785129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 
00:27:15.140 [2024-07-27 01:35:06.785281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.785439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.785465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.785662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.785827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.785854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.786016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.786184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.786213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.786383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.786560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.786586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.786726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.786903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.786930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.787127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.787283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.787310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.787492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.787644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.787670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 
00:27:15.140 [2024-07-27 01:35:06.787851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.788007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.788040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.788212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.788401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.788428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.788584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.788771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.788798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.788957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.789133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.789165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.789326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.789482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.789509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.789668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.789811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.789837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.790051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.790207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.790240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 
00:27:15.140 [2024-07-27 01:35:06.790412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.790572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.790605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.790764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.790923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.790951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.140 qpair failed and we were unable to recover it. 00:27:15.140 [2024-07-27 01:35:06.791152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.140 [2024-07-27 01:35:06.791331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.791360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.791557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.791743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.791778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.791936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.792092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.792124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.792301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.792472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.792498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.792650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.792815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.792844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 
00:27:15.141 [2024-07-27 01:35:06.793018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.793194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.793222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.793368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.793548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.793575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.793747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.793890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.793918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.794097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.794242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.794268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.794443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.794620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.794648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.794824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.794966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.794992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.795150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.795292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.795327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 
00:27:15.141 [2024-07-27 01:35:06.795500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.795646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.795674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.795818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.795971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.795998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.796175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.796318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.796346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.796522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.796658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.796684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.796831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.797032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.797064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.797212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.797373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.797400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.797613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.797804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.797831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 
00:27:15.141 [2024-07-27 01:35:06.797987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.798156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.798184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.798324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.798513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.798540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.798701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.798876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.798904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.799053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.799206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.799232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.799385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.799555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.799582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.141 qpair failed and we were unable to recover it. 00:27:15.141 [2024-07-27 01:35:06.799760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.141 [2024-07-27 01:35:06.799905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.799934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.800102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.800271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.800297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 
00:27:15.142 [2024-07-27 01:35:06.800446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.800590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.800616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.800759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.800953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.800979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.801149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.801317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.801344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.801513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.801656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.801683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.801879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.802064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.802092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.802265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.802425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.802452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.802642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.802790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.802817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 
00:27:15.142 [2024-07-27 01:35:06.803015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.803229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.803257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.803436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.803608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.803634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.803808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.803996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.804022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.804205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.804342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.804368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.804560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.804701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.804728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.804885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.805028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.805054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.805204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.805362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.805389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 
00:27:15.142 [2024-07-27 01:35:06.805563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.805701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.805730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.805888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.806067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.806094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.806253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.806398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.806425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.806594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.806766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.806792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.806954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.807153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.807180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.807348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.807490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.807516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.807693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.807836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.807862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 
00:27:15.142 [2024-07-27 01:35:06.808007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.808184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.808211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.808402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.808576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.808605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.808801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.808966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.808992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.809170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.809343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.809370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.809546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.809717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.809744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.142 qpair failed and we were unable to recover it. 00:27:15.142 [2024-07-27 01:35:06.809893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.142 [2024-07-27 01:35:06.810067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.810095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.810262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.810432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.810460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 
00:27:15.143 [2024-07-27 01:35:06.810605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.810750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.810777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.810919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.811119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.811146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.811298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.811467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.811495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.811644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.811838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.811864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.812042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.812243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.812271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.812449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.812608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.812635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.812837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.813018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.813044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 
00:27:15.143 [2024-07-27 01:35:06.813221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.813367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.813393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.813539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.813704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.813734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.813900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.814036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.814067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.814213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.814365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.814391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.814531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.814699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.814726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.814860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.815005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.815032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.815225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.815400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.815427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 
00:27:15.143 [2024-07-27 01:35:06.815565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.815742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.815769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.815947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.816090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.816118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.816272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.816445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.816471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.816643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.816790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.816816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.817017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.817163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.817191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.817357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.817554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.817580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.817758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.817916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.817944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 
00:27:15.143 [2024-07-27 01:35:06.818121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.818272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.818299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.818485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.818636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.818663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.818806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.818941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.818966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.819115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.819256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.819283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.819487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.819653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.819679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.143 qpair failed and we were unable to recover it. 00:27:15.143 [2024-07-27 01:35:06.819858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.819991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.143 [2024-07-27 01:35:06.820016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.820174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.820338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.820364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 
00:27:15.144 [2024-07-27 01:35:06.820518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.820656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.820684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.820884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.821032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.821066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.821240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.821384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.821411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.821555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.821690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.821717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.821858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.822022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.822048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.822195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.822364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.822389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.822537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.822682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.822708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 
00:27:15.144 [2024-07-27 01:35:06.822848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.822985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.823011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.823180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.823346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.823372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.823536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.823705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.823731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.823905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.824069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.824096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.824254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.824427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.824453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.824624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.824779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.824805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.824974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.825140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.825167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 
00:27:15.144 [2024-07-27 01:35:06.825342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.825490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.825517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.825659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.825796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.825823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.825962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.826106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.826132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.826278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.826433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.826459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.826627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.826783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.826809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.826953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.827125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.827152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.827325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.827496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.827523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 
00:27:15.144 [2024-07-27 01:35:06.827666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.827834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.827860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.828021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.828184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.828211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.828379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.828549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.828575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.828714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.828860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.828888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.829064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.829203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.829229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-07-27 01:35:06.829377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.829515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-07-27 01:35:06.829542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.829711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.829878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.829904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 
00:27:15.145 [2024-07-27 01:35:06.830042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.830189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.830215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.830384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.830538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.830565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.830752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.830945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.830972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.831179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.831330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.831356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.831495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.831632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.831659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.831800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.831940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.831967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.832124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.832297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.832323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 
00:27:15.145 [2024-07-27 01:35:06.832499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.832640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.832666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.832806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.832952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.832980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.833151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.833332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.833359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.833535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.833702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.833727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.833865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.834052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.834083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.834254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.834429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.834456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.834625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.834812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.834838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 
00:27:15.145 [2024-07-27 01:35:06.835009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.835184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.835211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.835384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.835555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.835581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.835751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.835902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.835930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.836077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.836221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.836248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.836405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.836606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.836632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-07-27 01:35:06.836799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.836963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-07-27 01:35:06.836988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.146 [2024-07-27 01:35:06.837134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.837276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.837302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 
00:27:15.146 [2024-07-27 01:35:06.837446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.837597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.837624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-07-27 01:35:06.837760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.837933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.837959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-07-27 01:35:06.838133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.838325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.838351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-07-27 01:35:06.838493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.838651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.838677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-07-27 01:35:06.838875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.839043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.839088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-07-27 01:35:06.839231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.839377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.839403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-07-27 01:35:06.839590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.839738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-07-27 01:35:06.839765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 
00:27:15.146 [2024-07-27 01:35:06.839936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.146 [2024-07-27 01:35:06.840101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.146 [2024-07-27 01:35:06.840129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420
00:27:15.146 qpair failed and we were unable to recover it.
00:27:15.147 [... the same three-message failure repeats for every retry from 01:35:06.840 through 01:35:06.893: two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420", followed by "qpair failed and we were unable to recover it." ...]
00:27:15.426 [2024-07-27 01:35:06.893547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.426 [2024-07-27 01:35:06.893719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.426 [2024-07-27 01:35:06.893745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420
00:27:15.426 qpair failed and we were unable to recover it.
00:27:15.426 [2024-07-27 01:35:06.893917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.894087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.894115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.426 qpair failed and we were unable to recover it. 00:27:15.426 [2024-07-27 01:35:06.894283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.894435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.894463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.426 qpair failed and we were unable to recover it. 00:27:15.426 [2024-07-27 01:35:06.894632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.894801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.894827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.426 qpair failed and we were unable to recover it. 00:27:15.426 [2024-07-27 01:35:06.894968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.895135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.895162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.426 qpair failed and we were unable to recover it. 00:27:15.426 [2024-07-27 01:35:06.895327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.895492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.895518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.426 qpair failed and we were unable to recover it. 00:27:15.426 [2024-07-27 01:35:06.895716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.895902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.895929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.426 qpair failed and we were unable to recover it. 00:27:15.426 [2024-07-27 01:35:06.896071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.896232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.896259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.426 qpair failed and we were unable to recover it. 
00:27:15.426 [2024-07-27 01:35:06.896436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.896604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.896631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.426 qpair failed and we were unable to recover it. 00:27:15.426 [2024-07-27 01:35:06.896808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.896953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.896981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.426 qpair failed and we were unable to recover it. 00:27:15.426 [2024-07-27 01:35:06.897153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.897323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.897349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.426 qpair failed and we were unable to recover it. 00:27:15.426 [2024-07-27 01:35:06.897511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.897711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.897737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.426 qpair failed and we were unable to recover it. 00:27:15.426 [2024-07-27 01:35:06.897882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.898027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.898057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.426 qpair failed and we were unable to recover it. 00:27:15.426 [2024-07-27 01:35:06.898208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.898357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.426 [2024-07-27 01:35:06.898382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.426 qpair failed and we were unable to recover it. 00:27:15.427 [2024-07-27 01:35:06.898528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.898700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.898725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 
00:27:15.427 [2024-07-27 01:35:06.898913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.899080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.899107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 00:27:15.427 [2024-07-27 01:35:06.899278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.899450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.899475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 00:27:15.427 [2024-07-27 01:35:06.899657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.899793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.899819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 00:27:15.427 [2024-07-27 01:35:06.899956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.900100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.900129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 00:27:15.427 [2024-07-27 01:35:06.900309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.900479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.900504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 00:27:15.427 [2024-07-27 01:35:06.900673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.900845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.900871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 00:27:15.427 [2024-07-27 01:35:06.901073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.901248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.901276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 
00:27:15.427 [2024-07-27 01:35:06.901419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.901591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.901621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 00:27:15.427 [2024-07-27 01:35:06.901758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.901923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.901950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 00:27:15.427 [2024-07-27 01:35:06.902122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.902263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.902290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 00:27:15.427 [2024-07-27 01:35:06.902466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.902602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.902628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 00:27:15.427 [2024-07-27 01:35:06.902774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.902942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.902968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 00:27:15.427 [2024-07-27 01:35:06.903126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.903289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.903315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 00:27:15.427 [2024-07-27 01:35:06.903473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.903643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.903669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 
00:27:15.427 [2024-07-27 01:35:06.903852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.904031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.427 [2024-07-27 01:35:06.904056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.427 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.904205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.904401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.904427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.904570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.904725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.904751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.904906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.905063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.905094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.905268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.905432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.905458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.905634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.905770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.905796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.905935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.906109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.906136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 
00:27:15.428 [2024-07-27 01:35:06.906296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.906451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.906477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.906649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.906813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.906839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.906989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.907128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.907154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.907336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.907500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.907526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.907673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.907858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.907884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.908031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.908211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.908238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.908386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.908524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.908554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 
00:27:15.428 [2024-07-27 01:35:06.908701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.908878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.908905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.909054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.909227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.909253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.909421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.909593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.909619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.909789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.909960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.909987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.910133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.910314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.910340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.910475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.910616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.910642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 00:27:15.428 [2024-07-27 01:35:06.910782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.910949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.910975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.428 qpair failed and we were unable to recover it. 
00:27:15.428 [2024-07-27 01:35:06.911148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.911295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.428 [2024-07-27 01:35:06.911322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.911495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.911664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.911692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.911840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.912002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.912028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.912205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.912346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.912372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.912540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.912715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.912742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.912879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.913012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.913039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.913187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.913359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.913387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-07-27 01:35:06.913552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.913730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.913757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.913921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.914109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.914136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.914295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.914455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.914481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.914653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.914792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.914818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.914993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.915137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.915164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.915313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.915459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.915486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.915662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.915804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.915831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-07-27 01:35:06.916005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.916145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.916172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.916346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.916511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.916537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.916709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.916845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.916871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.917069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.917235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.917261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.917435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.917608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.917635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.917791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.917949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.917975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.918165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.918306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.918332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-07-27 01:35:06.918506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.918663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.918690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.918888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.919065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.919093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.919248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.919392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.919418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.919555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.919747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.919773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.919910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.920084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-07-27 01:35:06.920111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-07-27 01:35:06.920286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.920427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.920453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.920625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.920781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.920808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 
00:27:15.430 [2024-07-27 01:35:06.921010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.921155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.921184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.921350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.921519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.921545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.921722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.921868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.921894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.922069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.922225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.922251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.922424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.922578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.922604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.922766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.922943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.922969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.923171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.923316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.923342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 
00:27:15.430 [2024-07-27 01:35:06.923512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.923683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.923709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.923855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.924020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.924046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.924228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.924413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.924440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.924600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.924771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.924797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.924968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.925114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.925141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.925280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.925422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.925447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.925619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.925761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.925787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 
00:27:15.430 [2024-07-27 01:35:06.925965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.926117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.926145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.926317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.926489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.926515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.926681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.926881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.926907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.927044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.927244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.927274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.927479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.927658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.927685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.927858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.928063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.928089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-07-27 01:35:06.928226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.928385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-07-27 01:35:06.928411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 
00:27:15.430 [2024-07-27 01:35:06.928561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.430 [2024-07-27 01:35:06.928709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.430 [2024-07-27 01:35:06.928735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420
00:27:15.430 qpair failed and we were unable to recover it.
[... the same error sequence (two posix.c:1032:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7fddd8000b90 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 01:35:06.928561 through 01:35:06.983049 ...]
00:27:15.437 [2024-07-27 01:35:06.982857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.437 [2024-07-27 01:35:06.983023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.437 [2024-07-27 01:35:06.983049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420
00:27:15.437 qpair failed and we were unable to recover it.
00:27:15.437 [2024-07-27 01:35:06.983253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-07-27 01:35:06.983420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-07-27 01:35:06.983446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.983607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.983752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.983780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.983953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.984125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.984152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.984304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.984458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.984486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.984690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.984863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.984889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.985078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.985255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.985282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.985425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.985610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.985636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 
00:27:15.438 [2024-07-27 01:35:06.985785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.985976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.986003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.986203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.986347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.986374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.986514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.986652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.986678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.986845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.987003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.987029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.987178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.987344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.987371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.987540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.987710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.987736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.987905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.988053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.988085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 
00:27:15.438 [2024-07-27 01:35:06.988284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.988471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.988497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.988666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.988866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.988893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.989091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.989268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.989295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.989444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.989617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.989644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.989812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.989983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.990010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.990184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.990352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.990379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 00:27:15.438 [2024-07-27 01:35:06.990554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.990719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.438 [2024-07-27 01:35:06.990746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.438 qpair failed and we were unable to recover it. 
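(Editorial aside, not part of the captured console output.) The errno = 111 repeated throughout this stretch of the log is ECONNREFUSED on Linux: each connect() issued from posix_sock_create reaches 10.0.0.2, but the connection is refused — typically because nothing is listening on port 4420 (the IANA-assigned NVMe/TCP port) or the target actively resets it — so nvme_tcp_qpair_connect_sock reports the socket error and the qpair reconnect attempt is abandoned. A minimal C sketch, under the assumption that no listener is bound to the chosen port, that reproduces the same errno outside the test harness:

```c
/* Hypothetical standalone reproduction of "connect() failed, errno = 111":
 * attempt a TCP connect to a port with no listener and print the errno.
 * This is an illustrative sketch only, not SPDK code. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* NVMe/TCP default port */
    };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    /* With no listener on 127.0.0.1:4420 the kernel refuses the
     * connection and errno is set to ECONNREFUSED (111 on Linux). */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}
```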
00:27:15.438 [2024-07-27 01:35:06.990920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.991100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.991130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 00:27:15.439 [2024-07-27 01:35:06.991329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.991500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.991526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 00:27:15.439 [2024-07-27 01:35:06.991661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.991806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.991833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 00:27:15.439 [2024-07-27 01:35:06.991999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.992170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.992196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 00:27:15.439 [2024-07-27 01:35:06.992337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.992501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.992527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 00:27:15.439 [2024-07-27 01:35:06.992686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.992864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.992892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 00:27:15.439 [2024-07-27 01:35:06.993069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.993240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.993266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 
00:27:15.439 [2024-07-27 01:35:06.993433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.993576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.993602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 00:27:15.439 [2024-07-27 01:35:06.993765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.993949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.993975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 00:27:15.439 [2024-07-27 01:35:06.994133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.994321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.994347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 00:27:15.439 [2024-07-27 01:35:06.994512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.994702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.994728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 00:27:15.439 [2024-07-27 01:35:06.994877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.995018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.995044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 00:27:15.439 [2024-07-27 01:35:06.995193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.995390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.995415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 00:27:15.439 [2024-07-27 01:35:06.995585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.995753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.995778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 
00:27:15.439 [2024-07-27 01:35:06.995922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.996092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.996119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 00:27:15.439 [2024-07-27 01:35:06.996306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.996459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.439 [2024-07-27 01:35:06.996485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.439 qpair failed and we were unable to recover it. 00:27:15.439 [2024-07-27 01:35:06.996625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.996778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.996805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:06.996977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.997138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.997165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:06.997354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.997520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.997546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:06.997719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.997890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.997915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:06.998068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.998206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.998232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 
00:27:15.440 [2024-07-27 01:35:06.998377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.998561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.998588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:06.998750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.998910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.998936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:06.999112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.999261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.999287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:06.999464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.999631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.999657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:06.999828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.999972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:06.999998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.000145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.000301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.000327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.000466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.000633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.000659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 
00:27:15.440 [2024-07-27 01:35:07.000815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.000952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.000978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.001154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.001291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.001317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.001468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.001634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.001661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.001799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.001972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.001999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.002196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.002364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.002390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.002597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.002738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.002764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.002912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.003049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.003081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 
00:27:15.440 [2024-07-27 01:35:07.003223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.003394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.003420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.003588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.003729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.003756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.003909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.004046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.004080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.004231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.004418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.004444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.004593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.004739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.004765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.004907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.005079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.005107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.005284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.005449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.005474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 
00:27:15.440 [2024-07-27 01:35:07.005616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.005801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.005826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.440 [2024-07-27 01:35:07.005966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.006110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.440 [2024-07-27 01:35:07.006138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.440 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.006311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.006489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.006515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.006685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.006852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.006877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.007027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.007171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.007197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.007370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.007535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.007561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.007725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.007883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.007909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 
00:27:15.441 [2024-07-27 01:35:07.008084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.008232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.008257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.008459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.008630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.008659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.008821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.008963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.008989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.009134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.009282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.009310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.009457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.009628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.009655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.009796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.009938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.009964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.010122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.010276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.010302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 
00:27:15.441 [2024-07-27 01:35:07.010477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.010636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.010663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.010820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.010983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.011009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.011159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.011304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.011330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.011502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.011645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.011671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.011834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.012001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.012034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.012182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.012352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.012378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.012547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.012715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.012740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 
00:27:15.441 [2024-07-27 01:35:07.012883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.013054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.013098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.441 qpair failed and we were unable to recover it. 00:27:15.441 [2024-07-27 01:35:07.013258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.013442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.441 [2024-07-27 01:35:07.013468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.013634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.013802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.013828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.014019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.014179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.014205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.014367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.014504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.014531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.014730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.014874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.014900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.015039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.015187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.015213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 
00:27:15.442 [2024-07-27 01:35:07.015349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.015519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.015549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.015719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.015878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.015904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.016054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.016260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.016286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.016448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.016621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.016648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.016786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.016926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.016952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.017104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.017248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.017274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.017442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.017584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.017612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 
00:27:15.442 [2024-07-27 01:35:07.017752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.017915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.017941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.018120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.018276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.018302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.018477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.018651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.018677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.018875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.019016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.019046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.019229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.019393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.019419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.019618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.019804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.019831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.020001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.020140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.020166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 
00:27:15.442 [2024-07-27 01:35:07.020312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.020458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.020486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.020664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.020842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.020868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.021019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.021193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.021219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.442 qpair failed and we were unable to recover it. 00:27:15.442 [2024-07-27 01:35:07.021360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.021534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.442 [2024-07-27 01:35:07.021561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.021718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.021875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.021902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.022044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.022232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.022259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.022427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.022582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.022609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 
00:27:15.443 [2024-07-27 01:35:07.022790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.022937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.022963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.023150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.023319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.023345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.023492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.023654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.023681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.023827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.023990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.024015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.024168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.024339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.024365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.024505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.024643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.024670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.024812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.024978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.025004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 
00:27:15.443 [2024-07-27 01:35:07.025194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.025335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.025361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.025509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.025680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.025707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.025880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.026056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.026086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.026265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.026466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.026492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.026643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.026800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.026826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.026992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.027149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.027176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.027356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.027508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.027534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 
00:27:15.443 [2024-07-27 01:35:07.027690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.027888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.027914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.028063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.028202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.028228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-07-27 01:35:07.028398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.028566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-07-27 01:35:07.028593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.028762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.028959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.028985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.029159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.029306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.029332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.029505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.029642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.029668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.029827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.029994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.030020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 
00:27:15.444 [2024-07-27 01:35:07.030196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.030374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.030400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.030540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.030706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.030732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.030929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.031106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.031133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.031274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.031408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.031434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.031613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.031789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.031816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.032015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.032163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.032189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.032354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.032501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.032527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 
00:27:15.444 [2024-07-27 01:35:07.032677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.032826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.032852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.032993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.033168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.033194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.033347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.033517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.033544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.033692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.033859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.033886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.034031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.034198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.034225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.034400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.034542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.034569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.034739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.034895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.034921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 
00:27:15.444 [2024-07-27 01:35:07.035119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.035259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.035285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.035421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.035589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.035616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.035783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.035920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-07-27 01:35:07.035946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-07-27 01:35:07.036092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.036261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.036287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.036431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.036585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.036612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.036792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.036964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.036991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.037144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.037285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.037312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 
00:27:15.445 [2024-07-27 01:35:07.037485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.037656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.037682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.037825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.037965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.037991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.038164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.038310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.038336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.038514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.038690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.038716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.038887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.039065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.039092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.039248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.039453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.039480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.039653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.039826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.039853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 
00:27:15.445 [2024-07-27 01:35:07.040027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.040196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.040223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.040370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.040541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.040567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.040740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.040889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.040915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.041092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.041237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.041264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.041466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.041641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.041668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.041837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.042002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.042029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.042197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.042369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.042395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 
00:27:15.445 [2024-07-27 01:35:07.042545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.042686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.042712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-07-27 01:35:07.042867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-07-27 01:35:07.043024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.043050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.043230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.043376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.043402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.043543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.043701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.043727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.043890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.044066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.044093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.044292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.044429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.044455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.044664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.044841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.044867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 
00:27:15.446 [2024-07-27 01:35:07.045009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.045162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.045188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.045337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.045480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.045506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.045706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.045847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.045873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.046018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.046188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.046215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.046352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.046506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.046532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.046730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.046869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.046895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.047069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.047205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.047231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 
00:27:15.446 [2024-07-27 01:35:07.047385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.047544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.047571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.047761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.047929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.047954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.048105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.048282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.048308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.048453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.048617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.048642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.048823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.048993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.049018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.049172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.049319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.049345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.049514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.049675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.049701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 
00:27:15.446 [2024-07-27 01:35:07.049903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.050056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.050094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.050250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.050396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.050422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.050565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.050711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.050737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.050902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.051085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.051112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.051260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.051424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.446 [2024-07-27 01:35:07.051449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.446 qpair failed and we were unable to recover it. 00:27:15.446 [2024-07-27 01:35:07.051592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.051738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.051764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.051934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.052083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.052110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 
00:27:15.447 [2024-07-27 01:35:07.052298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.052487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.052513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.052681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.052820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.052846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.053015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.053168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.053194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.053350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.053489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.053515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.053671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.053835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.053860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.054029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.054207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.054233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.054400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.054573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.054599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 
00:27:15.447 [2024-07-27 01:35:07.054746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.054882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.054907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.055051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.055231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.055257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.055393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.055560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.055585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.055757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.055912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.055938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.056123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.056288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.056314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.056450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.056621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.056647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.056790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.056989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.057015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 
00:27:15.447 [2024-07-27 01:35:07.057166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.057364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.057391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.057532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.057665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.057691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.057829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.058010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.058036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.058241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.058431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.058461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.058636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.058804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.058831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.447 [2024-07-27 01:35:07.058974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.059123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.447 [2024-07-27 01:35:07.059163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.447 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.059340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.059496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.059524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 
00:27:15.448 [2024-07-27 01:35:07.059694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.059854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.059881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.060045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.060206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.060232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.060374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.060550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.060576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.060746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.060881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.060907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.061086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.061258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.061284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.061444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.061610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.061637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.061806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.061954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.061982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 
00:27:15.448 [2024-07-27 01:35:07.062155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.062351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.062378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.062566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.062709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.062737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.062909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.063077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.063104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.063265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.063421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.063447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.063616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.063752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.063779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.063978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.064130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.064157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.064300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.064444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.064470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 
00:27:15.448 [2024-07-27 01:35:07.064606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.064749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.064774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.064911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.065052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.065090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.065238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.065416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.065443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.065580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.065744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.065770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.065943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.066084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.066112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.066258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.066400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.066427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.066627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.066788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.066814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 
00:27:15.448 [2024-07-27 01:35:07.066954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.067122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.067149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.067315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.067455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.067481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.448 [2024-07-27 01:35:07.067688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.067854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.448 [2024-07-27 01:35:07.067880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.448 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.068063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.068203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.068229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.068380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.068521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.068552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.068738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.068940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.068965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.069128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.069264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.069290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 
00:27:15.449 [2024-07-27 01:35:07.069453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.069616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.069643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.069788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.069962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.069989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.070171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.070312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.070338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.070476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.070624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.070650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.070789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.070986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.071012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.071197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.071356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.071383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.071539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.071680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.071706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 
00:27:15.449 [2024-07-27 01:35:07.071884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.072073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.072105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.072247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.072423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.072449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.072592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.072738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.072766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.072935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.073098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.073125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.073295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.073454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.073480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.073643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.073844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.073871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.074018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.074216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.074243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 
00:27:15.449 [2024-07-27 01:35:07.074414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.074588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.074615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.074803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.074973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.074999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.075169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.075331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.075359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.075526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.075681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.075711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.075881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.076026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.076053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.076246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.076395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.076421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.076560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.076737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.076765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 
00:27:15.449 [2024-07-27 01:35:07.076902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.077047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.077079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.077254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.077396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.077422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.449 qpair failed and we were unable to recover it. 00:27:15.449 [2024-07-27 01:35:07.077571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.449 [2024-07-27 01:35:07.077740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.077766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.077932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.078103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.078129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.078301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.078450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.078476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.078644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.078808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.078834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.079005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.079158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.079185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 
00:27:15.450 [2024-07-27 01:35:07.079354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.079526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.079553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.079726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.079864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.079890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.080025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.080172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.080200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.080347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.080518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.080545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.080698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.080841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.080867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.081038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.081212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.081240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.081402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.081563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.081590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 
00:27:15.450 [2024-07-27 01:35:07.081767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.081928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.081954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.082106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.082280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.082306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.082444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.082613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.082639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.082831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.083007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.083033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.083223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.083385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.083412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.083573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.083709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.083735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 00:27:15.450 [2024-07-27 01:35:07.083879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.084057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.450 [2024-07-27 01:35:07.084091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.450 qpair failed and we were unable to recover it. 
00:27:15.451 [2024-07-27 01:35:07.084246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.084416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.084442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.084588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.084754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.084781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.084926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.085099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.085127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.085268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.085432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.085459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.085630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.085784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.085810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.085951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.086129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.086156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.086330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.086583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.086609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 
00:27:15.451 [2024-07-27 01:35:07.086809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.086954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.086981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.087154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.087350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.087376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.087530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.087678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.087706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.087906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.088071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.088098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.088271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.088413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.088439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.088613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.088812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.088839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.089016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.089191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.089218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 
00:27:15.451 [2024-07-27 01:35:07.089380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.089537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.089563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.089727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.089871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.089899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.090081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.090257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.090284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.090536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.090699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.090725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.090897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.091033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.091065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.091216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.091387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.091413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 00:27:15.451 [2024-07-27 01:35:07.091583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.091721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.451 [2024-07-27 01:35:07.091747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.451 qpair failed and we were unable to recover it. 
00:27:15.451 [2024-07-27 01:35:07.091922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.092064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.092091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.092266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.092411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.092439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.092610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.092786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.092812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.092953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.093122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.093148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.093326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.093523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.093549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.093698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.093866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.093892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.094036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.094208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.094235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 
00:27:15.452 [2024-07-27 01:35:07.094423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.094591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.094618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.094779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.094950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.094978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.095142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.095313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.095340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.095531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.095673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.095699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.095873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.096044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.096076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.096230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.096378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.096406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.096542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.096677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.096703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 
00:27:15.452 [2024-07-27 01:35:07.096868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.097028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.097055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.097226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.097427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.097453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.097598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.097758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.097784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.097941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.098116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.098143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.098321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.098479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.098506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.098665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.098834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.098861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.452 qpair failed and we were unable to recover it. 00:27:15.452 [2024-07-27 01:35:07.099029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.099178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.452 [2024-07-27 01:35:07.099207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 
00:27:15.453 [2024-07-27 01:35:07.099355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.099540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.099566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.099736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.099870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.099897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.100055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.100232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.100258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.100424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.100624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.100650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.100830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.101007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.101034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.101186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.101321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.101347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.101519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.101685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.101710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 
00:27:15.453 [2024-07-27 01:35:07.101881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.102050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.102085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.102239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.102406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.102432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.102603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.102757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.102784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.102952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.103149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.103176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.103319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.103519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.103545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.103705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.103847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.103872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.104070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.104236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.104262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 
00:27:15.453 [2024-07-27 01:35:07.104434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.104591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.104617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.104770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.104929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.104956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.105117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.105282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.105308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.105483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.105654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.105681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.105876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.106048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.106081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.106231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.106403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.106429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.106587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.106745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.106770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 
00:27:15.453 [2024-07-27 01:35:07.106937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.107107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.107133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.107291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.107461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.107487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.107630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.107817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.107843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.107992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.108139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.108166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.453 [2024-07-27 01:35:07.108307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.108465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.453 [2024-07-27 01:35:07.108490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.453 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.108663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.108798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.108824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.108965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.109134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.109160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 
00:27:15.454 [2024-07-27 01:35:07.109296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.109437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.109462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.109609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.109780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.109806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.109979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.110195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.110221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.110385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.110534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.110561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.110721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.110861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.110888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.111049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.111253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.111279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.111423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.111603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.111631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 
00:27:15.454 [2024-07-27 01:35:07.111800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.111945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.111973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.112140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.112290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.112316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.112457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.112609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.112637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.112803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.112998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.113025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.113178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.113359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.113385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.113565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.113721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.113747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.113918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.114067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.114094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 
00:27:15.454 [2024-07-27 01:35:07.114294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.114495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.114521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.114691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.114857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.114883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.115052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.115235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.115261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.115406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.115565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.115591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.115764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.115908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.115934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.116123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.116288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.116314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.116478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.116644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.116670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 
00:27:15.454 [2024-07-27 01:35:07.116813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.116980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.117007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.117153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.117322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.117349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.117495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.117665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.117691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.117827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.117958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.117984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.454 [2024-07-27 01:35:07.118131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.118287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.454 [2024-07-27 01:35:07.118314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.454 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.118458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.118612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.118641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.118788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.118945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.118972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 
00:27:15.455 [2024-07-27 01:35:07.119169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.119326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.119357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.119498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.119681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.119717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.119876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.120049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.120083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.120255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.120393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.120421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.120592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.120748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.120774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.120944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.121124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.121153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.121293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.121451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.121480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 
00:27:15.455 [2024-07-27 01:35:07.121628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.121843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.121871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.122019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.122196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.122228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.122397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.122549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.122575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.122737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.122906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.122932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.123093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.123236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.123264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.123448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.123622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.123648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.123824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.123970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.123995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 
00:27:15.455 [2024-07-27 01:35:07.124143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.124297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.124324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.124529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.124690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.124716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.124894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.125054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.125086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.125237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.125425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.125452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.125599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.125757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.125794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.125986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.126142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.126169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.126359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.126502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.126539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 
00:27:15.455 [2024-07-27 01:35:07.126753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.126895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.126923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.127134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.127276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.127302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.127479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.127622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.127650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.455 qpair failed and we were unable to recover it. 00:27:15.455 [2024-07-27 01:35:07.127789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.127927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.455 [2024-07-27 01:35:07.127953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.128134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.128279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.128307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.128456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.128620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.128646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.128837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.128980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.129007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 
00:27:15.456 [2024-07-27 01:35:07.129161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.129320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.129361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.129540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.129698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.129724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.129901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.130092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.130119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.130295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.130436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.130466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.130661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.130819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.130845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.131010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.131155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.131182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.131333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.131503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.131531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 
00:27:15.456 [2024-07-27 01:35:07.131686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.131857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.131883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.132029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.132188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.132214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.132352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.132518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.132544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.132691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.132863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.132890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.133057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.133209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.133247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.133420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.133558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.133584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.133796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.133945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.133982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 
00:27:15.456 [2024-07-27 01:35:07.134150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.134351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.134376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.134524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.134701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.134726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.134894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.135055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.135087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.135258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.135405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.135439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.135582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.135747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.135773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.135943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.136089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.136117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 00:27:15.456 [2024-07-27 01:35:07.136262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.136428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.136454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.456 qpair failed and we were unable to recover it. 
00:27:15.456 [2024-07-27 01:35:07.136631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.456 [2024-07-27 01:35:07.136784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.136812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.136974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.137161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.137187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.137374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.137515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.137542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.137681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.137852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.137879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.138051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.138201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.138228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.138399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.138566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.138591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.138769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.138934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.138960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 
00:27:15.457 [2024-07-27 01:35:07.139097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.139272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.139297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.139460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.139621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.139646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.139810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.139984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.140011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.140204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.140346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.140373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.140529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.140696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.140722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.140865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.141022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.141049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.141205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.141351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.141377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 
00:27:15.457 [2024-07-27 01:35:07.141546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.141680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.141706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.141873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.142040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.142078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.142224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.142359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.142397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.142584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.142721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.142747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.142901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.143053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.143087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.143244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.143401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.143427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.143584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.143740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.143766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 
00:27:15.457 [2024-07-27 01:35:07.143938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.144091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.144127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.144333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.144472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.144497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.144648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.144820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.144846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.145013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.145211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.145238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.145388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.145558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.145583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.145751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.145893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.145921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 00:27:15.457 [2024-07-27 01:35:07.146110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.146254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.146280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.457 qpair failed and we were unable to recover it. 
00:27:15.457 [2024-07-27 01:35:07.146458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.457 [2024-07-27 01:35:07.146614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.146640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.146809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.146974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.146999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.147155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.147330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.147356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.147520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.147676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.147701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.147873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.148009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.148036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.148216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.148361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.148388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.148546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.148712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.148738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 
00:27:15.458 [2024-07-27 01:35:07.148880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.149033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.149066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.149213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.149387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.149414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.149562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.149757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.149792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.149992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.150131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.150158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.150304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.150470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.150496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.150640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.150830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.150856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.151012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.151153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.151180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 
00:27:15.458 [2024-07-27 01:35:07.151329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.151490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.151521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.151675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.151824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.151850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.152011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.152189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.152216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.152379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.152517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.152543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.152720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.152889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.152914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.153090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.153228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.153253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.153396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.153540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.153567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 
00:27:15.458 [2024-07-27 01:35:07.153735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.153896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.153922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.154079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.154221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.154247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.154426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.154576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.154602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.154750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.154923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.154949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.458 [2024-07-27 01:35:07.155093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.155265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.458 [2024-07-27 01:35:07.155291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.458 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.155468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.155615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.155642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.155787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.155930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.155957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 
00:27:15.459 [2024-07-27 01:35:07.156129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.156266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.156291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.156467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.156638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.156663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.156839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.156992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.157017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.157192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.157328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.157353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.157556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.157693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.157719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.157880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.158025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.158076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.158229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.158374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.158400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 
00:27:15.459 [2024-07-27 01:35:07.158562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.158702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.158728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.158884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.159047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.159082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.159274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.159424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.159450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.159626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.159793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.159819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.159967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.160142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.160168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.160319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.160487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.160513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.160681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.160873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.160899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 
00:27:15.459 [2024-07-27 01:35:07.161064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.161273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.161299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.161472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.161635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.161661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.161807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.161961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.161987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.162163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.162305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.162347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.162537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.162737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.162764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.162958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.163137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.459 [2024-07-27 01:35:07.163178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.459 qpair failed and we were unable to recover it. 00:27:15.459 [2024-07-27 01:35:07.163320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.163461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.163487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 
00:27:15.734 [2024-07-27 01:35:07.163644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.163823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.163861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.164022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.164188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.164214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.164384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.164584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.164611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.164765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.164922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.164949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.165124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.165284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.165312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.165469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.165610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.165635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.165817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.165993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.166020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 
00:27:15.734 [2024-07-27 01:35:07.166206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.166351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.166379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.166526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.166701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.166727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.166911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.167080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.167116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.167282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.167424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.167449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.167620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.167771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.167796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.167976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.168134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.168159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.168351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.168515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.168542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 
00:27:15.734 [2024-07-27 01:35:07.168718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.168890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.168916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.169103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.169279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.169312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.169479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.169630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.169663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.734 qpair failed and we were unable to recover it. 00:27:15.734 [2024-07-27 01:35:07.169838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.734 [2024-07-27 01:35:07.170009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.170034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.170228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.170425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.170454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.170609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.170781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.170807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.170973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.171150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.171177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 
00:27:15.735 [2024-07-27 01:35:07.171325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.171465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.171490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.171646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.171799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.171824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.172018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.172188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.172215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.172385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.172592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.172617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.172793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.172968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.172993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.173165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.173337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.173363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.173541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.173679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.173704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 
00:27:15.735 [2024-07-27 01:35:07.173874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.174037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.174069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.174246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.174424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.174450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.174626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.174795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.174820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.174967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.175151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.175177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.175343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.175523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.175549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.175750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.175895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.175929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.176132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.176320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.176345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 
00:27:15.735 [2024-07-27 01:35:07.176505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.176652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.176678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.176877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.177052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.177083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.177254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.177430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.177466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.177646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.177807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.177832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.178002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.178156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.178181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.178354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.178523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.178548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.178700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.178897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.178923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 
00:27:15.735 [2024-07-27 01:35:07.179065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.179209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.179234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.179382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.179551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.179581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.179786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.179946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.179971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.735 qpair failed and we were unable to recover it. 00:27:15.735 [2024-07-27 01:35:07.180113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.735 [2024-07-27 01:35:07.180283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.180309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.180482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.180635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.180660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.180833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.181035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.181065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.181241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.181416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.181441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 
00:27:15.736 [2024-07-27 01:35:07.181615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.181782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.181807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.181947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.182112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.182138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.182309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.182489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.182514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.182656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.182824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.182849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.182988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.183163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.183193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.183331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.183517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.183544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.183719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.183856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.183881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 
00:27:15.736 [2024-07-27 01:35:07.184071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.184211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.184236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.184395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.184561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.184586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.184757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.184927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.184953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.185143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.185292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.185317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.185469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.185669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.185694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.185839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.186012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.186038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.186183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.186321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.186346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 
00:27:15.736 [2024-07-27 01:35:07.186487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.186628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.186657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.186801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.186999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.187024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.187198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.187360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.187385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.187531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.187713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.187739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.187913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.188056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.188088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.188265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.188438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.188464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.188610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.188761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.188786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 
00:27:15.736 [2024-07-27 01:35:07.188957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.189110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.189136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.189306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.189447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.189472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.736 qpair failed and we were unable to recover it. 00:27:15.736 [2024-07-27 01:35:07.189644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.736 [2024-07-27 01:35:07.189787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.189814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.189960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.190102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.190127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.190298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.190444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.190471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.190611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.190781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.190807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.190951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.191091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.191117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 
00:27:15.737 [2024-07-27 01:35:07.191258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.191403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.191430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.191576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.191742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.191767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.191908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.192076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.192102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.192279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.192434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.192460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.192632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.192771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.192798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.192936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.193094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.193121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.193309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.193483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.193509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 
00:27:15.737 [2024-07-27 01:35:07.193658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.193819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.193846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.193991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.194146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.194172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.194363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.194522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.194547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.194711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.194882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.194908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.195073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.195243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.195269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.195423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.195563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.195590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.195759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.195899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.195924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 
00:27:15.737 [2024-07-27 01:35:07.196119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.196277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.196303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.196468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.196611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.196638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.196794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.196966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.196992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.197198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.197343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.197368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.197542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.197742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.197767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.197915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.198055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.198085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.198260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.198399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.198424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 
00:27:15.737 [2024-07-27 01:35:07.198591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.198728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.198753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.198895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.199036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.199067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.737 [2024-07-27 01:35:07.199235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.199407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.737 [2024-07-27 01:35:07.199432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.737 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.199582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.199740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.199765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.199928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.200098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.200124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.200274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.200427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.200452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.200632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.200831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.200857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 
00:27:15.738 [2024-07-27 01:35:07.201009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.201155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.201183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.201361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.201497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.201522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.201700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.201850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.201878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.202064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.202207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.202232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.202379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.202523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.202549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.202698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.202851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.202877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.203056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.203204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.203231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 
00:27:15.738 [2024-07-27 01:35:07.203400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.203564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.203589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.203732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.203889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.203915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.204064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.204240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.204266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.204424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.204554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.204579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.204728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.204888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.204913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.205064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.205229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.205254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.205428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.205604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.205629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 
00:27:15.738 [2024-07-27 01:35:07.205803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.205973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.205999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.206185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.206321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.206347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.206518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.206689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.206714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.206855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.206996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.207022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.207198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.207368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.207394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.207560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.207716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.207741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 00:27:15.738 [2024-07-27 01:35:07.207907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.208094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.208120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.738 qpair failed and we were unable to recover it. 
00:27:15.738 [2024-07-27 01:35:07.208292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.208432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.738 [2024-07-27 01:35:07.208458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.208629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.208795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.208820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.208995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.209150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.209176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.209341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.209509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.209534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.209677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.209836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.209862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.210034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.210239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.210265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.210410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.210575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.210600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 
00:27:15.739 [2024-07-27 01:35:07.210746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.210886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.210911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.211095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.211270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.211297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.211497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.211644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.211669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.211840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.211977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.212002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.212194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.212332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.212357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.212536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.212712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.212738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.212884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.213073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.213100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 
00:27:15.739 [2024-07-27 01:35:07.213291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.213459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.213485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.213643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.213788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.213813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.213959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.214136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.214163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.214337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.214479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.214505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.214677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.214852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.214879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.215043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.215197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.215223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.215400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.215561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.215586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 
00:27:15.739 [2024-07-27 01:35:07.215754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.215928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.215955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.216138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.216338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.216364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.216580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.216751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.216777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.739 qpair failed and we were unable to recover it. 00:27:15.739 [2024-07-27 01:35:07.216926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.739 [2024-07-27 01:35:07.217067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.217093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.217265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.217421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.217447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.217622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.217789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.217814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.217962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.218116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.218142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 
00:27:15.740 [2024-07-27 01:35:07.218294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.218444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.218469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.218626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.218760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.218785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.218955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.219123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.219149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.219325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.219520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.219545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.219726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.219882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.219907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.220072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.220213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.220237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.220415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.220583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.220609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 
00:27:15.740 [2024-07-27 01:35:07.220756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.220904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.220932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.221079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.221220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.221246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.221428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.221563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.221588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.221787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.221927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.221953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.222127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.222268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.222293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.222500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.222687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.222713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.222891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.223065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.223092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 
00:27:15.740 [2024-07-27 01:35:07.223234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.223373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.223399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.223559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.223759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.223784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.223949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.224119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.224146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.224345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.224488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.224513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.224648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.224817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.224843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.224979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.225128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.225153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.225298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.225482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.225508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 
00:27:15.740 [2024-07-27 01:35:07.225677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.225845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.225871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.226015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.226181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.226207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.740 qpair failed and we were unable to recover it. 00:27:15.740 [2024-07-27 01:35:07.226357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.740 [2024-07-27 01:35:07.226499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.226525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.226713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.226853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.226880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.227037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.227197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.227224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.227381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.227549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.227575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.227748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.227918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.227943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 
00:27:15.741 [2024-07-27 01:35:07.228092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.228253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.228279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.228452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.228602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.228628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.228791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.228962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.228988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.229150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.229317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.229342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.229491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.229661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.229686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.229829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.229999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.230025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.230215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.230358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.230383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 
00:27:15.741 [2024-07-27 01:35:07.230560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.230746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.230772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.230969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.231120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.231147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.231321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.231474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.231501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.231670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.231836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.231862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.232071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.232232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.232258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.232444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.232619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.232650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.232822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.232962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.232987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 
00:27:15.741 [2024-07-27 01:35:07.233139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.233317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.233342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.233519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.233665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.233691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.233845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.234010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.234036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.234230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.234386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.234415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.234594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.234737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.234762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.234903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.235082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.235110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.235301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.235469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.235494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 
00:27:15.741 [2024-07-27 01:35:07.235657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.235829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.235854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.236003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.236159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.236191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.741 qpair failed and we were unable to recover it. 00:27:15.741 [2024-07-27 01:35:07.236330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.236524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.741 [2024-07-27 01:35:07.236549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.236694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.236827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.236852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.237017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.237198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.237224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.237383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.237548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.237573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.237736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.237904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.237929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 
00:27:15.742 [2024-07-27 01:35:07.238087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.238240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.238265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.238446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.238614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.238639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.238781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.238950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.238975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.239137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.239284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.239308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.239480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.239629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.239656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.239837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.240006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.240031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.240185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.240333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.240358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 
00:27:15.742 [2024-07-27 01:35:07.240526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.240700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.240727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.240889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.241086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.241111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.241259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.241397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.241421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.241561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.241695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.241720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.241877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.242039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.242069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.242210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.242349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.242373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.242543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.242709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.242734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 
00:27:15.742 [2024-07-27 01:35:07.242903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.243052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.243085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.243239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.243410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.243437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.243611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.243777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.243802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.243974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.244129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.244154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.244317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.244483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.244508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.244668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.244833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.244857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.742 [2024-07-27 01:35:07.245003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.245175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.245200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 
00:27:15.742 [2024-07-27 01:35:07.245354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.245540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.742 [2024-07-27 01:35:07.245565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.742 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.245769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.245905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.245929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.246087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.246242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.246267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.246405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.246571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.246596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.246736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.246906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.246931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.247080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.247251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.247276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.247414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.247553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.247577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 
00:27:15.743 [2024-07-27 01:35:07.247718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.247904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.247929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.248088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.248270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.248295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.248438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.248600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.248625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.248797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.248940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.248966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.249114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.249272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.249297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.249434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.249602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.249627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.249799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.249937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.249962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 
00:27:15.743 [2024-07-27 01:35:07.250116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.250263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.250292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.250461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.250601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.250625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.250789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.250925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.250950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.251119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.251316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.251341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.251475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.251613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.251638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.251795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.251967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.251993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.252165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.252309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.252335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 
00:27:15.743 [2024-07-27 01:35:07.252471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.252639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.252664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.252832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.252973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.252998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.253167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.253321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.253346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.743 qpair failed and we were unable to recover it. 00:27:15.743 [2024-07-27 01:35:07.253501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.743 [2024-07-27 01:35:07.253639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.253664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.253808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.253971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.253996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.254169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.254310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.254337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.254538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.254673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.254698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 
00:27:15.744 [2024-07-27 01:35:07.254848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.254986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.255012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.255197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.255363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.255388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.255533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.255669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.255693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.255827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.255967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.255991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.256165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.256304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.256329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.256474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.256637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.256661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.256803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.256968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.256992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 
00:27:15.744 [2024-07-27 01:35:07.257166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.257325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.257349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.257507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.257661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.257686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.257857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.257993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.258018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.258170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.258339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.258365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.258512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.258677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.258701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.258832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.259001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.259026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.259189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.259353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.259378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 
00:27:15.744 [2024-07-27 01:35:07.259555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.259697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.259721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.259865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.259999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.260024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.260208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.260349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.260374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.260518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.260703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.260728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.260885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.261038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.261069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.261248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.261394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.261420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.261597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.261736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.261760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 
00:27:15.744 [2024-07-27 01:35:07.261897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.262032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.262056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.262239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.262382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.262407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.262546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.262713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.262738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.744 [2024-07-27 01:35:07.262903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.263082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.744 [2024-07-27 01:35:07.263108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.744 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.263272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.263425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.263450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.263599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.263741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.263766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.263912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.264076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.264101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 
00:27:15.745 [2024-07-27 01:35:07.264303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.264449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.264473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.264631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.264810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.264834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.264971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.265171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.265197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.265345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.265482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.265507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.265673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.265836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.265861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.266007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.266158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.266185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.266365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.266500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.266525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 
00:27:15.745 [2024-07-27 01:35:07.266678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.266870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.266894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.267041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.267242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.267268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.267419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.267587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.267617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.267760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.267955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.267980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.268124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.268279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.268305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.268478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.268645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.268669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.268816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.268947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.268972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 
00:27:15.745 [2024-07-27 01:35:07.269118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.269255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.269280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.269438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.269600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.269626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.269780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.269949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.269974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.270152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.270316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.270341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.270511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.270657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.270681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.270856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.271027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.271051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.271210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.271359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.271384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 
00:27:15.745 [2024-07-27 01:35:07.271556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.271691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.271716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.271888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.272041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.272072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.272237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.272393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.272418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.745 [2024-07-27 01:35:07.272561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.272697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.745 [2024-07-27 01:35:07.272722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.745 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.272865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.273006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.273032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.273173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.273342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.273367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.273527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.273661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.273686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 
00:27:15.746 [2024-07-27 01:35:07.273843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.273973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.273998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.274156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.274343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.274368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.274560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.274707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.274733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.274934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.275196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.275224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.275366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.275512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.275538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.275719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.275889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.275914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.276110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.276260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.276285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 
00:27:15.746 [2024-07-27 01:35:07.276423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.276577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.276602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.276761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.276927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.276952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.277125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.277292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.277317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.277453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.277600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.277626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.277792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.277931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.277956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.278108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.278253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.278279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.278443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.278615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.278640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 
00:27:15.746 [2024-07-27 01:35:07.278790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.278930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.278955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.279136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.279283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.279308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.279464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.279615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.279640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.279813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.279953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.279978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.280145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.280285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.280310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.280443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.280611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.280636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.280805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.280967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.280992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 
00:27:15.746 [2024-07-27 01:35:07.281169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.281306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.281331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.281542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.281683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.281708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.281877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.282046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.282078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.282220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.282353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.746 [2024-07-27 01:35:07.282378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.746 qpair failed and we were unable to recover it. 00:27:15.746 [2024-07-27 01:35:07.282520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.282656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.282681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.282846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.282982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.283007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.283171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.283358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.283384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 
00:27:15.747 [2024-07-27 01:35:07.283531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.283696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.283722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.283865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.284000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.284025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.284180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.284346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.284372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.284532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.284697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.284722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.284916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.285056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.285090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.285241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.285378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.285402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.285543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.285690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.285715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 
00:27:15.747 [2024-07-27 01:35:07.285856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.285989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.286013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.286163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.286312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.286337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.286483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.286651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.286676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.286813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.286999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.287024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.287198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.287363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.287388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.287541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.287715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.287740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.287889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.288024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.288050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 
00:27:15.747 [2024-07-27 01:35:07.288210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.288376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.288401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.288558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.288722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.288746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.288893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.289057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.289089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.289276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.289452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.289476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.289609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.289776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.289801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.289991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.290129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.290155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.290329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.290494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.290520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 
00:27:15.747 [2024-07-27 01:35:07.290658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.290829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.290854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.291023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.291200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.291226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.291403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.291569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.291594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.747 qpair failed and we were unable to recover it. 00:27:15.747 [2024-07-27 01:35:07.291733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.747 [2024-07-27 01:35:07.291884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.291908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.292069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.292207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.292232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.292418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.292550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.292575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.292742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.292908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.292932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 
00:27:15.748 [2024-07-27 01:35:07.293093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.293252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.293276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.293449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.293601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.293625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.293769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.293916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.293941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.294111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.294287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.294313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.294451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.294604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.294629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.294787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.294963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.294989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.295138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.295293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.295318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 
00:27:15.748 [2024-07-27 01:35:07.295465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.295655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.295680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.295845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.295994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.296019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.296165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.296322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.296348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.296515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.296653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.296678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.296818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.296977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.297002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.297178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.297320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.297345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.297517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.297656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.297680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 
00:27:15.748 [2024-07-27 01:35:07.297867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.298068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.298094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.298241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.298395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.298420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.298596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.298758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.298784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.298956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.299098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.299128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.299286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.299456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.299484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.299670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.299837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.748 [2024-07-27 01:35:07.299862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.748 qpair failed and we were unable to recover it. 00:27:15.748 [2024-07-27 01:35:07.300005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.300149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.300174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 
00:27:15.749 [2024-07-27 01:35:07.300312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.300479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.300504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.300648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.300785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.300809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.300976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.301117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.301143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.301295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.301438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.301463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.301606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.301742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.301766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.301935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.302078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.302104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.302265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.302430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.302455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 
00:27:15.749 [2024-07-27 01:35:07.302616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.302779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.302804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.302981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.303120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.303145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.303325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.303488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.303513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.303662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.303857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.303882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.304052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.304196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.304221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.304380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.304549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.304574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.304743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.304886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.304910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 
00:27:15.749 [2024-07-27 01:35:07.305047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.305207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.305233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.305411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.305553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.305578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.305709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.305892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.305916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.306070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.306221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.306246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.306388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.306558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.306583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.306715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.306883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.306908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.307053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.307207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.307232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 
00:27:15.749 [2024-07-27 01:35:07.307371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.307505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.307530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.307697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.307839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.307863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.308032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.308223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.308250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.308420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.308588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.308613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.308754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.308923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.308947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.309078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.309249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.309274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.749 qpair failed and we were unable to recover it. 00:27:15.749 [2024-07-27 01:35:07.309470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.749 [2024-07-27 01:35:07.309661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.309692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 
00:27:15.750 [2024-07-27 01:35:07.309869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.310038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.310073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.310252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.310391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.310417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.310604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.310759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.310784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.310961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.311123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.311150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.311295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.311453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.311479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.311653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.311831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.311858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.312034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.312177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.312203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 
00:27:15.750 [2024-07-27 01:35:07.312401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.312549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.312576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.312756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.312928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.312954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.313117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.313276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.313302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.313464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.313607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.313634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.313777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.313945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.313971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.314118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.314287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.314312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.314505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.314646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.314671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 
00:27:15.750 [2024-07-27 01:35:07.314843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.315007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.315033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.315224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.315394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.315420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.315564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.315700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.315725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.315898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.316074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.316101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.316300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.316441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.316468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.316649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.316802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.316830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.316999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.317165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.317191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 
00:27:15.750 [2024-07-27 01:35:07.317360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.317515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.317540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.317744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.317907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.317932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.318099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.318241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.318266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.318440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.318572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.318597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.318740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.318879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.318904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.319040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.319185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.319211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 00:27:15.750 [2024-07-27 01:35:07.319379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.319515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.750 [2024-07-27 01:35:07.319540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.750 qpair failed and we were unable to recover it. 
00:27:15.751 [2024-07-27 01:35:07.319689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.319861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.319887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.320025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.320241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.320267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.320408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.320606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.320631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.320807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.320941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.320966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.321138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.321338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.321363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.321502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.321671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.321697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.321896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.322037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.322068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 
00:27:15.751 [2024-07-27 01:35:07.322264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.322432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.322457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.322630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.322772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.322799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.322976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.323173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.323199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.323337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.323507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.323532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.323673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.323819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.323847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.323991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.324162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.324188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.324359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.324520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.324546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 
00:27:15.751 [2024-07-27 01:35:07.324734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.324905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.324930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.325121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.325265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.325290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.325432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.325571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.325597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.325739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.325880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.325907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.326084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.326251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.326276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.326451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.326636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.326662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.326820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.326989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.327014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 
00:27:15.751 [2024-07-27 01:35:07.327195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.327364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.327394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.327569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.327707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.327732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.327897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.328032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.328057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.328212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.328388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.328414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.328558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.328715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.328741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.328927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.329078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.329106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 00:27:15.751 [2024-07-27 01:35:07.329255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.329396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.751 [2024-07-27 01:35:07.329422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.751 qpair failed and we were unable to recover it. 
00:27:15.751 [2024-07-27 01:35:07.329596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.329739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.329765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.329934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.330106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.330131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.330276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.330467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.330492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.330641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.330799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.330829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.331001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.331167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.331192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.331348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.331498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.331523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.331746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.331888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.331913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 
00:27:15.752 [2024-07-27 01:35:07.332056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.332209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.332235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.332410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.332549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.332574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.332715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.332879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.332903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.333044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.333192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.333218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.333362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.333532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.333557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.333728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.333900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.333925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.334067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.334211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.334240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 
00:27:15.752 [2024-07-27 01:35:07.334382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.334579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.334605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.334777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.334919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.334945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.335105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.335277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.335303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.335471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.335637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.335662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.335806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.335961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.335986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.336152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.336296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.336321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.336470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.336649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.336673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 
00:27:15.752 [2024-07-27 01:35:07.336821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.336958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.336983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.337130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.337299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.337324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.337465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.337622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.337651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.337793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.337963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.337988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.338133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.338266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.338291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.338463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.338633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.338657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 00:27:15.752 [2024-07-27 01:35:07.338799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.338956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.338982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.752 qpair failed and we were unable to recover it. 
00:27:15.752 [2024-07-27 01:35:07.339145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.752 [2024-07-27 01:35:07.339308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.339334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.339517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.339660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.339685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.339853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.339993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.340018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.340181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.340330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.340356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.340503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.340639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.340664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.340830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.340974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.340999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.341181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.341351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.341376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 
00:27:15.753 [2024-07-27 01:35:07.341571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.341714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.341741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.341910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.342074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.342100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.342243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.342416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.342441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.342624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.342795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.342820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.342963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.343117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.343144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.343307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.343448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.343475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.343654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.343821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.343845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 
00:27:15.753 [2024-07-27 01:35:07.343999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.344147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.344173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 01:35:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:15.753 [2024-07-27 01:35:07.344349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.344502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.344532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 01:35:07 -- common/autotest_common.sh@852 -- # return 0 00:27:15.753 [2024-07-27 01:35:07.344708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.344876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.344901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 01:35:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:15.753 [2024-07-27 01:35:07.345048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.345187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.345213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 01:35:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:15.753 [2024-07-27 01:35:07.345359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.345503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.345529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.753 01:35:07 -- common/autotest_common.sh@10 -- # set +x 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.345720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.345912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.345942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 
00:27:15.753 [2024-07-27 01:35:07.346095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.346275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.346302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.346447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.346627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.346653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.346853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.347009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.347034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde0000b90 with addr=10.0.0.2, port=4420 00:27:15.753 qpair failed and we were unable to recover it. 00:27:15.753 [2024-07-27 01:35:07.347250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.753 [2024-07-27 01:35:07.347403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.347430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.347590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.347735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.347760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.347915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.348103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.348130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.348290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.348429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.348455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 
00:27:15.754 [2024-07-27 01:35:07.348651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.348820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.348845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.349019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.349182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.349209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.349372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.349527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.349555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.349732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.349865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.349891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.350029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.350181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.350209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.350388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.350519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.350545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.350718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.350859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.350887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 
00:27:15.754 [2024-07-27 01:35:07.351039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.351219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.351245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.351426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.351588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.351613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.351788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.351963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.351989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.352145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.352292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.352317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.352487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.352656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.352681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.352849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.353018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.353045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.353208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.353376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.353401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 
00:27:15.754 [2024-07-27 01:35:07.353546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.353682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.353707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.353849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.354017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.354043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.354219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.354382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.354408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.354582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.354716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.354742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.354900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.355073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.355100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.355242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.355417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.355442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.355591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.355754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.355779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 
00:27:15.754 [2024-07-27 01:35:07.355917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.356083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.356110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.356254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.356423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.356449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.356603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.356800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.356825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.754 qpair failed and we were unable to recover it. 00:27:15.754 [2024-07-27 01:35:07.356965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.754 [2024-07-27 01:35:07.357111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.357139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.357280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.357424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.357450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.357604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.357772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.357797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.357965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.358129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.358155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 
00:27:15.755 [2024-07-27 01:35:07.358327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.358505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.358532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.358685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.358829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.358854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.359025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.359179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.359207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.359346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.359515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.359540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.359708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.359877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.359901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.360072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.360246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.360271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.360410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.360548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.360573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 
00:27:15.755 [2024-07-27 01:35:07.360716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.360883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.360908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.361080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.361220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.361246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.361387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.361583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.361608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.361784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.361967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.361993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.362135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.362300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.362325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.362470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.362604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.362629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.362831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.362996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.363022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 
00:27:15.755 [2024-07-27 01:35:07.363168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.363328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.363353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.363544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.363679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.363704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.363888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.364028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.364053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.364268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.364435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.364460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.364605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.364742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.364767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.364916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.365056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.365091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.365262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.365396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.365426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 
00:27:15.755 [2024-07-27 01:35:07.365596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.365732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.365757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.365925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.366070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.366096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.366285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.366459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.366485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.755 qpair failed and we were unable to recover it. 00:27:15.755 [2024-07-27 01:35:07.366623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.755 [2024-07-27 01:35:07.366788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.366813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.366994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.367137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.367166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.367305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.367466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.367492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.367628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.367790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.367816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 
00:27:15.756 [2024-07-27 01:35:07.367973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.368131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 01:35:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.756 [2024-07-27 01:35:07.368158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.368294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 01:35:07 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:15.756 [2024-07-27 01:35:07.368443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.368469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 01:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:15.756 [2024-07-27 01:35:07.368654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 01:35:07 -- common/autotest_common.sh@10 -- # set +x 00:27:15.756 [2024-07-27 01:35:07.368795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.368832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.368993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.369141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.369167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.369339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.369493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.369518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.369691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.369849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.369874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 
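Interleaved with the connect() retries above, the xtrace lines from the harness show the target side being prepared: timing_exit start_nvmf_tgt closes the target-startup timing block, a trap installs the process_shm / nvmftestfini cleanup on exit, and host/target_disconnect.sh line 19 issues rpc_cmd bdev_malloc_create 64 512 -b Malloc0 to create the RAM-backed bdev the test will export. In the SPDK test framework rpc_cmd is a thin wrapper around scripts/rpc.py, so a roughly equivalent standalone invocation (a sketch only, assuming a target is already running and listening on its default RPC socket) would be:

    # Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0,
    # against a running SPDK target on the default RPC socket.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0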
00:27:15.756 [2024-07-27 01:35:07.370042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.370184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.370209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.370378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.370522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.370549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.370692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.370829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.370853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.371031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.371213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.371241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.371390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.371535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.371561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.371708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.371849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.371877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.372053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.372235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.372260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 
00:27:15.756 [2024-07-27 01:35:07.372537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.372716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.372742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.372915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.373088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.373115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.373266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.373411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.373436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.373602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.373774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.373799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.373967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.374136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.374161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.374304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.374451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.374476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.374616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.374759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.374787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 
00:27:15.756 [2024-07-27 01:35:07.374936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.375080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.375106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.375253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.375424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.375449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.375602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.375743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.375770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.756 [2024-07-27 01:35:07.375909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.376094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.756 [2024-07-27 01:35:07.376121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.756 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.376305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.376512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.376538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.376714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.376882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.376907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.377198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.377389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.377414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 
00:27:15.757 [2024-07-27 01:35:07.377584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.377755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.377780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.377950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.378098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.378133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.378289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.378467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.378492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.378639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.378784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.378809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.378951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.379091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.379116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.379262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.379411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.379436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.379614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.379777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.379802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 
00:27:15.757 [2024-07-27 01:35:07.379978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.380145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.380170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.380339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.380514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.380539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.380685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.380827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.380852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.380997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.381140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.381167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.381314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.381522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.381547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.381688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.381859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.381884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.382030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.382181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.382207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 
00:27:15.757 [2024-07-27 01:35:07.382350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.382499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.382526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.382707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.382865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.382890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.383034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.383196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.383222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.383382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.383536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.383561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.383804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.383975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.384000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.384172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.384340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.384366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.384526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.384664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.384690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 
00:27:15.757 [2024-07-27 01:35:07.384839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.384998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.757 [2024-07-27 01:35:07.385023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.757 qpair failed and we were unable to recover it. 00:27:15.757 [2024-07-27 01:35:07.385175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.385319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.385345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.385521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.385664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.385689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.385838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.385990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.386015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.386225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.386399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.386424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.386570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.386717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.386742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.386897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.387039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.387071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 
00:27:15.758 [2024-07-27 01:35:07.387256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.387400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.387427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.387587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.387775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.387800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.387973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.388125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.388151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.388331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.388503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.388529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.388670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.388812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.388837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.389013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.389232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.389258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.389446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.389636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.389661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 
00:27:15.758 [2024-07-27 01:35:07.389805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.389980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.390006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.390166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.390310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.390337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.390523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.390679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.390704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.390856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.391024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.391049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.391220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.391398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.391423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.391596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.391739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.391766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.391913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.392119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.392145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 
00:27:15.758 [2024-07-27 01:35:07.392308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.392477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.392502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.392670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.392817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.392844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.393013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 Malloc0 00:27:15.758 [2024-07-27 01:35:07.393189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.393215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.393370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 01:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:15.758 [2024-07-27 01:35:07.393543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.393568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 01:35:07 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:15.758 [2024-07-27 01:35:07.393762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 01:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:15.758 01:35:07 -- common/autotest_common.sh@10 -- # set +x 00:27:15.758 [2024-07-27 01:35:07.393935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.758 [2024-07-27 01:35:07.393960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.758 qpair failed and we were unable to recover it. 00:27:15.758 [2024-07-27 01:35:07.394133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.394283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.394309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.394452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.394591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.394616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 
00:27:15.759 [2024-07-27 01:35:07.394792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.394936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.394962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.395109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.395247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.395272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.395414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.395604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.395629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.395818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.395957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.395982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.396158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.396321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.396346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.396517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.396681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.396710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.396784] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.759 [2024-07-27 01:35:07.396882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.397048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.397079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 
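At this point the target side has just initialized its TCP transport ("*** TCP Transport Init ***" above), but the host's connect attempts keep failing until the subsystem listener on 10.0.0.2:4420 is added later in the script. Hypothetical manual checks (not run by this job) that would show when the port actually opens, assuming standard nc and nvme-cli tools on the initiator:

  # Hypothetical diagnostics, not part of target_disconnect.sh:
  nc -zv 10.0.0.2 4420                       # succeeds once the TCP listener exists
  nvme discover -t tcp -a 10.0.0.2 -s 4420   # lists the subsystem once discovery is reachable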
00:27:15.759 [2024-07-27 01:35:07.397231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.397383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.397408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.397576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.397749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.397774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.397916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.398064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.398091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.398244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.398412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.398437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.398591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.398757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.398783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.398951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.399093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.399121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.399292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.399470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.399495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 
00:27:15.759 [2024-07-27 01:35:07.399647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.399818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.399844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.400016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.400228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.400255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.400428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.400600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.400625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.400768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.400942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.400967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.401142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.401285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.401310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.401466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.401606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.401631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.401791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.401985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.402009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 
00:27:15.759 [2024-07-27 01:35:07.402163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.402307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.402334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.402519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.402659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.402685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.402831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.402997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.403023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.403206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.403376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.403401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.759 [2024-07-27 01:35:07.403603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.403767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.759 [2024-07-27 01:35:07.403797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.759 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.403968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.404141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.404167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.404321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.404454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.404478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 
00:27:15.760 [2024-07-27 01:35:07.404615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.404785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.404810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.404998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 01:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:15.760 [2024-07-27 01:35:07.405143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.405169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 01:35:07 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:15.760 [2024-07-27 01:35:07.405319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 01:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:15.760 01:35:07 -- common/autotest_common.sh@10 -- # set +x 00:27:15.760 [2024-07-27 01:35:07.405505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.405530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.405670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.405811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.405837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.405999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.406172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.406197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.406338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.406512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.406537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 
00:27:15.760 [2024-07-27 01:35:07.406684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.406852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.406877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.407072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.407253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.407279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.407441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.407617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.407642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.407808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.407952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.407977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.408132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.408305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.408330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.408509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.408648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.408673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.408849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.408991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.409016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 
00:27:15.760 [2024-07-27 01:35:07.409163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.409314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.409340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.409513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.409701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.409726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.409895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.410038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.410070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.410248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.410436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.410462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.410643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.410780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.410804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.410954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.411127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.411154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.411301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.411472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.411497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 
00:27:15.760 [2024-07-27 01:35:07.411635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.411836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.411862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.412033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.412186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.412211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.412359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.412525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.412551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.412692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.412889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.412914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-07-27 01:35:07.413082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 01:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:15.760 [2024-07-27 01:35:07.413224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-07-27 01:35:07.413248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 01:35:07 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:15.761 01:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:15.761 [2024-07-27 01:35:07.413406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 01:35:07 -- common/autotest_common.sh@10 -- # set +x 00:27:15.761 [2024-07-27 01:35:07.413578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.413603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fddd8000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.413823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.414007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.414035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdde8000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 
00:27:15.761 [2024-07-27 01:35:07.414267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.414431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.414459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.414605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.414763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.414789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.414948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.415105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.415132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.415301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.415469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.415493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.415638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.415785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.415810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.415980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.416126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.416153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.416296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.416466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.416491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 
00:27:15.761 [2024-07-27 01:35:07.416669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.416812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.416837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.417011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.417186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.417213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.417357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.417528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.417553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.417753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.417898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.417923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.418074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.418237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.418262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.418409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.418564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.418588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.418731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.418893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.418918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 
00:27:15.761 [2024-07-27 01:35:07.419070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.419233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.419258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.419444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.419583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.419608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.419742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.419908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.419932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.420079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.420222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.420247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.420402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.420576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.420603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.420750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.420908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.420938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-07-27 01:35:07.421081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 01:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:15.761 [2024-07-27 01:35:07.421229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-07-27 01:35:07.421254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 
00:27:15.761 01:35:07 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:15.761 01:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:15.761 [2024-07-27 01:35:07.421416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.761 01:35:07 -- common/autotest_common.sh@10 -- # set +x
00:27:15.761 [2024-07-27 01:35:07.421609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.761 [2024-07-27 01:35:07.421634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420
00:27:15.761 qpair failed and we were unable to recover it.
00:27:15.761 [2024-07-27 01:35:07.421772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.761 [2024-07-27 01:35:07.421921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.761 [2024-07-27 01:35:07.421946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420
00:27:15.761 qpair failed and we were unable to recover it.
00:27:15.761 [2024-07-27 01:35:07.422097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.761 [2024-07-27 01:35:07.422238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.761 [2024-07-27 01:35:07.422263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420
00:27:15.761 qpair failed and we were unable to recover it.
00:27:15.761 [2024-07-27 01:35:07.422429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.761 [2024-07-27 01:35:07.422565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.761 [2024-07-27 01:35:07.422590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-07-27 01:35:07.422729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.762 [2024-07-27 01:35:07.422908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.762 [2024-07-27 01:35:07.422933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-07-27 01:35:07.423073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.762 [2024-07-27 01:35:07.423246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.762 [2024-07-27 01:35:07.423271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-07-27 01:35:07.423415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.762 [2024-07-27 01:35:07.423558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.762 [2024-07-27 01:35:07.423584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-07-27 01:35:07.423732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.762 [2024-07-27 01:35:07.423930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.762 [2024-07-27 01:35:07.423955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-07-27 01:35:07.424116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.762 [2024-07-27 01:35:07.424259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.762 [2024-07-27 01:35:07.424284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-07-27 01:35:07.424436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.762 [2024-07-27 01:35:07.424606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.762 [2024-07-27 01:35:07.424631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-07-27 01:35:07.424774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.762 [2024-07-27 01:35:07.424946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.762 [2024-07-27 01:35:07.424970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9429f0 with addr=10.0.0.2, port=4420
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-07-27 01:35:07.425084] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:15.762 [2024-07-27 01:35:07.427610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-07-27 01:35:07.427796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-07-27 01:35:07.427824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-07-27 01:35:07.427840] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-07-27 01:35:07.427855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0
00:27:15.762 [2024-07-27 01:35:07.427890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 01:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:15.762 01:35:07 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:15.762 01:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:15.762 01:35:07 -- common/autotest_common.sh@10 -- # set +x
00:27:15.762 01:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:15.762 01:35:07 -- host/target_disconnect.sh@58 -- # wait 754295
00:27:15.762 [2024-07-27 01:35:07.437417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-07-27 01:35:07.437562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-07-27 01:35:07.437588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-07-27 01:35:07.437605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-07-27 01:35:07.437618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0
00:27:15.762 [2024-07-27 01:35:07.437647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-07-27 01:35:07.447446] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-07-27 01:35:07.447639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-07-27 01:35:07.447666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-07-27 01:35:07.447684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-07-27 01:35:07.447703] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0
00:27:15.762 [2024-07-27 01:35:07.447734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
00:27:15.762 [2024-07-27 01:35:07.457483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.762 [2024-07-27 01:35:07.457664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.762 [2024-07-27 01:35:07.457691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.762 [2024-07-27 01:35:07.457708] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.762 [2024-07-27 01:35:07.457722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0
00:27:15.762 [2024-07-27 01:35:07.457752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.762 qpair failed and we were unable to recover it.
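The two rpc_cmd nvmf_subsystem_add_listener invocations above come from host/target_disconnect.sh; rpc_cmd is the harness's wrapper around SPDK's JSON-RPC client. Issued by hand against the running target app, the equivalent calls would look roughly like the sketch below; the listener arguments are taken verbatim from the log, while the scripts/rpc.py path and the default RPC socket it talks to are assumptions:

  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the first listener is registered, the target prints the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice seen above and the host-side failures change character: connect() no longer fails with errno 111, but the Fabrics CONNECT for qpair id 3 is rejected (Unknown controller ID 0x1, sct 1, sc 130), so the qpair still cannot be recovered.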
00:27:15.762 [2024-07-27 01:35:07.467411] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.762 [2024-07-27 01:35:07.467561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.762 [2024-07-27 01:35:07.467588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.762 [2024-07-27 01:35:07.467606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.762 [2024-07-27 01:35:07.467620] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:15.762 [2024-07-27 01:35:07.467650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.762 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-27 01:35:07.477419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-07-27 01:35:07.477577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-07-27 01:35:07.477604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-07-27 01:35:07.477620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-07-27 01:35:07.477633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.025 [2024-07-27 01:35:07.477662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-27 01:35:07.487438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-07-27 01:35:07.487582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-07-27 01:35:07.487608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-07-27 01:35:07.487626] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-07-27 01:35:07.487640] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.025 [2024-07-27 01:35:07.487670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 
00:27:16.025 [2024-07-27 01:35:07.497549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-07-27 01:35:07.497714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-07-27 01:35:07.497741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-07-27 01:35:07.497761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-07-27 01:35:07.497776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.025 [2024-07-27 01:35:07.497806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-27 01:35:07.507539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-07-27 01:35:07.507690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-07-27 01:35:07.507716] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-07-27 01:35:07.507733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-07-27 01:35:07.507747] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.025 [2024-07-27 01:35:07.507778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-27 01:35:07.517568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-07-27 01:35:07.517721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-07-27 01:35:07.517747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-07-27 01:35:07.517764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-07-27 01:35:07.517779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.025 [2024-07-27 01:35:07.517808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 
00:27:16.025 [2024-07-27 01:35:07.527584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-07-27 01:35:07.527726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-07-27 01:35:07.527751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-07-27 01:35:07.527767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-07-27 01:35:07.527781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.025 [2024-07-27 01:35:07.527809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-27 01:35:07.537553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-07-27 01:35:07.537703] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-07-27 01:35:07.537729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-07-27 01:35:07.537751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-07-27 01:35:07.537765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.025 [2024-07-27 01:35:07.537795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-27 01:35:07.547705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-07-27 01:35:07.547891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-07-27 01:35:07.547918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-07-27 01:35:07.547935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-07-27 01:35:07.547949] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.025 [2024-07-27 01:35:07.547978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 
00:27:16.025 [2024-07-27 01:35:07.557710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-07-27 01:35:07.557869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-07-27 01:35:07.557894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-07-27 01:35:07.557909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-07-27 01:35:07.557923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.025 [2024-07-27 01:35:07.557952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-27 01:35:07.567664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-07-27 01:35:07.567814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-07-27 01:35:07.567848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-07-27 01:35:07.567873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-07-27 01:35:07.567891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.025 [2024-07-27 01:35:07.567922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-27 01:35:07.577691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-07-27 01:35:07.577849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-07-27 01:35:07.577874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-07-27 01:35:07.577890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-07-27 01:35:07.577903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.025 [2024-07-27 01:35:07.577933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 
00:27:16.025 [2024-07-27 01:35:07.587758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-07-27 01:35:07.587904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-07-27 01:35:07.587929] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-07-27 01:35:07.587945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.025 [2024-07-27 01:35:07.587959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.025 [2024-07-27 01:35:07.587988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-27 01:35:07.597770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.025 [2024-07-27 01:35:07.597916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.025 [2024-07-27 01:35:07.597942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.025 [2024-07-27 01:35:07.597957] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.026 [2024-07-27 01:35:07.597971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.026 [2024-07-27 01:35:07.598001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-27 01:35:07.607808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.026 [2024-07-27 01:35:07.607993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.026 [2024-07-27 01:35:07.608018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.026 [2024-07-27 01:35:07.608033] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.026 [2024-07-27 01:35:07.608047] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.026 [2024-07-27 01:35:07.608081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.026 qpair failed and we were unable to recover it. 
00:27:16.026 [2024-07-27 01:35:07.617883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.026 [2024-07-27 01:35:07.618029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.026 [2024-07-27 01:35:07.618055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.026 [2024-07-27 01:35:07.618078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.026 [2024-07-27 01:35:07.618093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.026 [2024-07-27 01:35:07.618122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-27 01:35:07.627953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.026 [2024-07-27 01:35:07.628114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.026 [2024-07-27 01:35:07.628140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.026 [2024-07-27 01:35:07.628162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.026 [2024-07-27 01:35:07.628177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.026 [2024-07-27 01:35:07.628205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-27 01:35:07.637939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.026 [2024-07-27 01:35:07.638111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.026 [2024-07-27 01:35:07.638137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.026 [2024-07-27 01:35:07.638153] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.026 [2024-07-27 01:35:07.638167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.026 [2024-07-27 01:35:07.638203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.026 qpair failed and we were unable to recover it. 
00:27:16.026 [2024-07-27 01:35:07.648046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.026 [2024-07-27 01:35:07.648211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.026 [2024-07-27 01:35:07.648236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.026 [2024-07-27 01:35:07.648253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.026 [2024-07-27 01:35:07.648267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.026 [2024-07-27 01:35:07.648296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-27 01:35:07.658000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.026 [2024-07-27 01:35:07.658151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.026 [2024-07-27 01:35:07.658177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.026 [2024-07-27 01:35:07.658193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.026 [2024-07-27 01:35:07.658208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.026 [2024-07-27 01:35:07.658238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-27 01:35:07.667981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.026 [2024-07-27 01:35:07.668138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.026 [2024-07-27 01:35:07.668163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.026 [2024-07-27 01:35:07.668178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.026 [2024-07-27 01:35:07.668191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.026 [2024-07-27 01:35:07.668220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.026 qpair failed and we were unable to recover it. 
00:27:16.026 [2024-07-27 01:35:07.678032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.026 [2024-07-27 01:35:07.678192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.026 [2024-07-27 01:35:07.678219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.026 [2024-07-27 01:35:07.678234] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.026 [2024-07-27 01:35:07.678248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.026 [2024-07-27 01:35:07.678277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-27 01:35:07.688082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.026 [2024-07-27 01:35:07.688229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.026 [2024-07-27 01:35:07.688254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.026 [2024-07-27 01:35:07.688270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.026 [2024-07-27 01:35:07.688284] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.026 [2024-07-27 01:35:07.688312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-27 01:35:07.698141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.026 [2024-07-27 01:35:07.698327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.026 [2024-07-27 01:35:07.698354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.026 [2024-07-27 01:35:07.698374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.026 [2024-07-27 01:35:07.698388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.026 [2024-07-27 01:35:07.698418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.026 qpair failed and we were unable to recover it. 
00:27:16.026 [2024-07-27 01:35:07.708146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.026 [2024-07-27 01:35:07.708341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.026 [2024-07-27 01:35:07.708366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.026 [2024-07-27 01:35:07.708382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.026 [2024-07-27 01:35:07.708395] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.026 [2024-07-27 01:35:07.708424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-27 01:35:07.718151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.026 [2024-07-27 01:35:07.718314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.026 [2024-07-27 01:35:07.718340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.026 [2024-07-27 01:35:07.718361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.026 [2024-07-27 01:35:07.718376] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.026 [2024-07-27 01:35:07.718404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-27 01:35:07.728148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.026 [2024-07-27 01:35:07.728292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.026 [2024-07-27 01:35:07.728317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.026 [2024-07-27 01:35:07.728333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.026 [2024-07-27 01:35:07.728346] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.026 [2024-07-27 01:35:07.728374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.027 qpair failed and we were unable to recover it. 
00:27:16.027 [2024-07-27 01:35:07.738161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.027 [2024-07-27 01:35:07.738314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.027 [2024-07-27 01:35:07.738339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.027 [2024-07-27 01:35:07.738354] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.027 [2024-07-27 01:35:07.738368] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.027 [2024-07-27 01:35:07.738396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.027 qpair failed and we were unable to recover it. 00:27:16.027 [2024-07-27 01:35:07.748289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.027 [2024-07-27 01:35:07.748437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.027 [2024-07-27 01:35:07.748462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.027 [2024-07-27 01:35:07.748478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.027 [2024-07-27 01:35:07.748492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.027 [2024-07-27 01:35:07.748520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.027 qpair failed and we were unable to recover it. 00:27:16.027 [2024-07-27 01:35:07.758228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.027 [2024-07-27 01:35:07.758378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.027 [2024-07-27 01:35:07.758405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.027 [2024-07-27 01:35:07.758420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.027 [2024-07-27 01:35:07.758434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.027 [2024-07-27 01:35:07.758462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.027 qpair failed and we were unable to recover it. 
00:27:16.027 [2024-07-27 01:35:07.768277] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.027 [2024-07-27 01:35:07.768423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.027 [2024-07-27 01:35:07.768449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.027 [2024-07-27 01:35:07.768465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.027 [2024-07-27 01:35:07.768478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.027 [2024-07-27 01:35:07.768506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.027 qpair failed and we were unable to recover it. 00:27:16.027 [2024-07-27 01:35:07.778315] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.027 [2024-07-27 01:35:07.778472] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.027 [2024-07-27 01:35:07.778497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.027 [2024-07-27 01:35:07.778512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.027 [2024-07-27 01:35:07.778526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.027 [2024-07-27 01:35:07.778554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.027 qpair failed and we were unable to recover it. 00:27:16.288 [2024-07-27 01:35:07.788299] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.288 [2024-07-27 01:35:07.788455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.288 [2024-07-27 01:35:07.788480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.288 [2024-07-27 01:35:07.788496] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.288 [2024-07-27 01:35:07.788510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.288 [2024-07-27 01:35:07.788538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.288 qpair failed and we were unable to recover it. 
00:27:16.288 [2024-07-27 01:35:07.798345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.288 [2024-07-27 01:35:07.798490] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.288 [2024-07-27 01:35:07.798516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.288 [2024-07-27 01:35:07.798531] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.288 [2024-07-27 01:35:07.798545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.288 [2024-07-27 01:35:07.798573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.288 qpair failed and we were unable to recover it. 00:27:16.288 [2024-07-27 01:35:07.808378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.288 [2024-07-27 01:35:07.808552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.288 [2024-07-27 01:35:07.808577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.288 [2024-07-27 01:35:07.808606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.288 [2024-07-27 01:35:07.808620] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.288 [2024-07-27 01:35:07.808650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.288 qpair failed and we were unable to recover it. 00:27:16.288 [2024-07-27 01:35:07.818399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.288 [2024-07-27 01:35:07.818552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.288 [2024-07-27 01:35:07.818578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.288 [2024-07-27 01:35:07.818598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.288 [2024-07-27 01:35:07.818613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.288 [2024-07-27 01:35:07.818643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.288 qpair failed and we were unable to recover it. 
00:27:16.288 [2024-07-27 01:35:07.828436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.288 [2024-07-27 01:35:07.828608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.288 [2024-07-27 01:35:07.828635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.288 [2024-07-27 01:35:07.828651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.288 [2024-07-27 01:35:07.828665] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.288 [2024-07-27 01:35:07.828694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.288 qpair failed and we were unable to recover it. 00:27:16.288 [2024-07-27 01:35:07.838439] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.288 [2024-07-27 01:35:07.838584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.288 [2024-07-27 01:35:07.838610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.288 [2024-07-27 01:35:07.838625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.288 [2024-07-27 01:35:07.838639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.288 [2024-07-27 01:35:07.838667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.288 qpair failed and we were unable to recover it. 00:27:16.288 [2024-07-27 01:35:07.848522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.288 [2024-07-27 01:35:07.848668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.288 [2024-07-27 01:35:07.848693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.289 [2024-07-27 01:35:07.848709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.289 [2024-07-27 01:35:07.848723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.289 [2024-07-27 01:35:07.848752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.289 qpair failed and we were unable to recover it. 
00:27:16.289 [2024-07-27 01:35:07.858514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.289 [2024-07-27 01:35:07.858677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.289 [2024-07-27 01:35:07.858703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.289 [2024-07-27 01:35:07.858718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.289 [2024-07-27 01:35:07.858732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.289 [2024-07-27 01:35:07.858760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.289 qpair failed and we were unable to recover it. 00:27:16.289 [2024-07-27 01:35:07.868525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.289 [2024-07-27 01:35:07.868675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.289 [2024-07-27 01:35:07.868700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.289 [2024-07-27 01:35:07.868715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.289 [2024-07-27 01:35:07.868729] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.289 [2024-07-27 01:35:07.868758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.289 qpair failed and we were unable to recover it. 00:27:16.289 [2024-07-27 01:35:07.878564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.289 [2024-07-27 01:35:07.878710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.289 [2024-07-27 01:35:07.878735] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.289 [2024-07-27 01:35:07.878750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.289 [2024-07-27 01:35:07.878764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.289 [2024-07-27 01:35:07.878792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.289 qpair failed and we were unable to recover it. 
00:27:16.289 [2024-07-27 01:35:07.888614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.289 [2024-07-27 01:35:07.888759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.289 [2024-07-27 01:35:07.888784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.289 [2024-07-27 01:35:07.888799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.289 [2024-07-27 01:35:07.888813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.289 [2024-07-27 01:35:07.888840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.289 qpair failed and we were unable to recover it. 00:27:16.289 [2024-07-27 01:35:07.898654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.289 [2024-07-27 01:35:07.898808] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.289 [2024-07-27 01:35:07.898834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.289 [2024-07-27 01:35:07.898862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.289 [2024-07-27 01:35:07.898878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.289 [2024-07-27 01:35:07.898908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.289 qpair failed and we were unable to recover it. 00:27:16.289 [2024-07-27 01:35:07.908660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.289 [2024-07-27 01:35:07.908803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.289 [2024-07-27 01:35:07.908829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.289 [2024-07-27 01:35:07.908845] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.289 [2024-07-27 01:35:07.908860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.289 [2024-07-27 01:35:07.908888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.289 qpair failed and we were unable to recover it. 
00:27:16.289 [2024-07-27 01:35:07.918674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.289 [2024-07-27 01:35:07.918821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.289 [2024-07-27 01:35:07.918846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.289 [2024-07-27 01:35:07.918861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.289 [2024-07-27 01:35:07.918875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.289 [2024-07-27 01:35:07.918903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.289 qpair failed and we were unable to recover it. 00:27:16.289 [2024-07-27 01:35:07.928699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.289 [2024-07-27 01:35:07.928842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.289 [2024-07-27 01:35:07.928867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.289 [2024-07-27 01:35:07.928883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.289 [2024-07-27 01:35:07.928897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.289 [2024-07-27 01:35:07.928926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.289 qpair failed and we were unable to recover it. 00:27:16.289 [2024-07-27 01:35:07.938744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.289 [2024-07-27 01:35:07.938913] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.289 [2024-07-27 01:35:07.938939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.289 [2024-07-27 01:35:07.938954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.289 [2024-07-27 01:35:07.938968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.289 [2024-07-27 01:35:07.938996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.289 qpair failed and we were unable to recover it. 
00:27:16.289 [2024-07-27 01:35:07.948748] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.289 [2024-07-27 01:35:07.948900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.289 [2024-07-27 01:35:07.948925] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.289 [2024-07-27 01:35:07.948941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.289 [2024-07-27 01:35:07.948954] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.289 [2024-07-27 01:35:07.948983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.289 qpair failed and we were unable to recover it. 00:27:16.289 [2024-07-27 01:35:07.958855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.289 [2024-07-27 01:35:07.959015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.289 [2024-07-27 01:35:07.959041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.289 [2024-07-27 01:35:07.959056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.289 [2024-07-27 01:35:07.959081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.289 [2024-07-27 01:35:07.959110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.289 qpair failed and we were unable to recover it. 00:27:16.289 [2024-07-27 01:35:07.968809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.289 [2024-07-27 01:35:07.968985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.289 [2024-07-27 01:35:07.969012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.289 [2024-07-27 01:35:07.969031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.289 [2024-07-27 01:35:07.969046] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.289 [2024-07-27 01:35:07.969084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.289 qpair failed and we were unable to recover it. 
00:27:16.289 [2024-07-27 01:35:07.978842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.289 [2024-07-27 01:35:07.978994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.289 [2024-07-27 01:35:07.979020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.289 [2024-07-27 01:35:07.979036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.290 [2024-07-27 01:35:07.979050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.290 [2024-07-27 01:35:07.979086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.290 qpair failed and we were unable to recover it. 00:27:16.290 [2024-07-27 01:35:07.988860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.290 [2024-07-27 01:35:07.989007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.290 [2024-07-27 01:35:07.989037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.290 [2024-07-27 01:35:07.989055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.290 [2024-07-27 01:35:07.989077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.290 [2024-07-27 01:35:07.989108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.290 qpair failed and we were unable to recover it. 00:27:16.290 [2024-07-27 01:35:07.998909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.290 [2024-07-27 01:35:07.999053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.290 [2024-07-27 01:35:07.999086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.290 [2024-07-27 01:35:07.999102] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.290 [2024-07-27 01:35:07.999115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.290 [2024-07-27 01:35:07.999144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.290 qpair failed and we were unable to recover it. 
00:27:16.290 [2024-07-27 01:35:08.008924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.290 [2024-07-27 01:35:08.009074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.290 [2024-07-27 01:35:08.009100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.290 [2024-07-27 01:35:08.009115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.290 [2024-07-27 01:35:08.009129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.290 [2024-07-27 01:35:08.009157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.290 qpair failed and we were unable to recover it. 00:27:16.290 [2024-07-27 01:35:08.018961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.290 [2024-07-27 01:35:08.019128] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.290 [2024-07-27 01:35:08.019153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.290 [2024-07-27 01:35:08.019169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.290 [2024-07-27 01:35:08.019183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.290 [2024-07-27 01:35:08.019212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.290 qpair failed and we were unable to recover it. 00:27:16.290 [2024-07-27 01:35:08.029078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.290 [2024-07-27 01:35:08.029228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.290 [2024-07-27 01:35:08.029254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.290 [2024-07-27 01:35:08.029270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.290 [2024-07-27 01:35:08.029284] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.290 [2024-07-27 01:35:08.029313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.290 qpair failed and we were unable to recover it. 
00:27:16.290 [2024-07-27 01:35:08.039017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.290 [2024-07-27 01:35:08.039186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.290 [2024-07-27 01:35:08.039212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.290 [2024-07-27 01:35:08.039228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.290 [2024-07-27 01:35:08.039241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.290 [2024-07-27 01:35:08.039270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.290 qpair failed and we were unable to recover it. 00:27:16.550 [2024-07-27 01:35:08.049155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.550 [2024-07-27 01:35:08.049325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.550 [2024-07-27 01:35:08.049352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.550 [2024-07-27 01:35:08.049368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.550 [2024-07-27 01:35:08.049382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.550 [2024-07-27 01:35:08.049410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.550 qpair failed and we were unable to recover it. 00:27:16.550 [2024-07-27 01:35:08.059198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.550 [2024-07-27 01:35:08.059358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.550 [2024-07-27 01:35:08.059383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.550 [2024-07-27 01:35:08.059399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.550 [2024-07-27 01:35:08.059413] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.550 [2024-07-27 01:35:08.059442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.550 qpair failed and we were unable to recover it. 
00:27:16.550 [2024-07-27 01:35:08.069104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.550 [2024-07-27 01:35:08.069260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.550 [2024-07-27 01:35:08.069285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.550 [2024-07-27 01:35:08.069301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.550 [2024-07-27 01:35:08.069316] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.550 [2024-07-27 01:35:08.069345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.550 qpair failed and we were unable to recover it. 00:27:16.550 [2024-07-27 01:35:08.079228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.550 [2024-07-27 01:35:08.079381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.550 [2024-07-27 01:35:08.079412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.550 [2024-07-27 01:35:08.079429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.550 [2024-07-27 01:35:08.079443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.550 [2024-07-27 01:35:08.079472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.550 qpair failed and we were unable to recover it. 00:27:16.550 [2024-07-27 01:35:08.089178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.550 [2024-07-27 01:35:08.089349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.550 [2024-07-27 01:35:08.089376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.550 [2024-07-27 01:35:08.089392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.550 [2024-07-27 01:35:08.089416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.550 [2024-07-27 01:35:08.089446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.550 qpair failed and we were unable to recover it. 
00:27:16.550 [2024-07-27 01:35:08.099217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.551 [2024-07-27 01:35:08.099368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.551 [2024-07-27 01:35:08.099395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.551 [2024-07-27 01:35:08.099410] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.551 [2024-07-27 01:35:08.099424] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.551 [2024-07-27 01:35:08.099453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.551 qpair failed and we were unable to recover it. 00:27:16.551 [2024-07-27 01:35:08.109210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.551 [2024-07-27 01:35:08.109357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.551 [2024-07-27 01:35:08.109383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.551 [2024-07-27 01:35:08.109399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.551 [2024-07-27 01:35:08.109412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.551 [2024-07-27 01:35:08.109442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.551 qpair failed and we were unable to recover it. 00:27:16.551 [2024-07-27 01:35:08.119234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.551 [2024-07-27 01:35:08.119377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.551 [2024-07-27 01:35:08.119403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.551 [2024-07-27 01:35:08.119418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.551 [2024-07-27 01:35:08.119432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.551 [2024-07-27 01:35:08.119460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.551 qpair failed and we were unable to recover it. 
00:27:16.551 [2024-07-27 01:35:08.129295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.551 [2024-07-27 01:35:08.129445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.551 [2024-07-27 01:35:08.129471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.551 [2024-07-27 01:35:08.129486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.551 [2024-07-27 01:35:08.129500] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.551 [2024-07-27 01:35:08.129528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.551 qpair failed and we were unable to recover it. 00:27:16.551 [2024-07-27 01:35:08.139412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.551 [2024-07-27 01:35:08.139561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.551 [2024-07-27 01:35:08.139586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.551 [2024-07-27 01:35:08.139601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.551 [2024-07-27 01:35:08.139615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.551 [2024-07-27 01:35:08.139643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.551 qpair failed and we were unable to recover it. 00:27:16.551 [2024-07-27 01:35:08.149348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.551 [2024-07-27 01:35:08.149492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.551 [2024-07-27 01:35:08.149517] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.551 [2024-07-27 01:35:08.149533] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.551 [2024-07-27 01:35:08.149547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.551 [2024-07-27 01:35:08.149575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.551 qpair failed and we were unable to recover it. 
00:27:16.551 [2024-07-27 01:35:08.159459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.551 [2024-07-27 01:35:08.159601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.551 [2024-07-27 01:35:08.159626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.551 [2024-07-27 01:35:08.159642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.551 [2024-07-27 01:35:08.159656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.551 [2024-07-27 01:35:08.159685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.551 qpair failed and we were unable to recover it. 00:27:16.551 [2024-07-27 01:35:08.169392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.551 [2024-07-27 01:35:08.169535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.551 [2024-07-27 01:35:08.169564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.551 [2024-07-27 01:35:08.169581] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.551 [2024-07-27 01:35:08.169595] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.551 [2024-07-27 01:35:08.169623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.551 qpair failed and we were unable to recover it. 00:27:16.551 [2024-07-27 01:35:08.179429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.551 [2024-07-27 01:35:08.179576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.551 [2024-07-27 01:35:08.179601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.551 [2024-07-27 01:35:08.179616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.551 [2024-07-27 01:35:08.179630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.551 [2024-07-27 01:35:08.179658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.551 qpair failed and we were unable to recover it. 
00:27:16.551 [2024-07-27 01:35:08.189455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.551 [2024-07-27 01:35:08.189601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.551 [2024-07-27 01:35:08.189625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.551 [2024-07-27 01:35:08.189641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.551 [2024-07-27 01:35:08.189656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.551 [2024-07-27 01:35:08.189683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.551 qpair failed and we were unable to recover it. 00:27:16.551 [2024-07-27 01:35:08.199497] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.551 [2024-07-27 01:35:08.199648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.551 [2024-07-27 01:35:08.199673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.551 [2024-07-27 01:35:08.199688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.551 [2024-07-27 01:35:08.199702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.551 [2024-07-27 01:35:08.199730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.551 qpair failed and we were unable to recover it. 00:27:16.551 [2024-07-27 01:35:08.209589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.551 [2024-07-27 01:35:08.209737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.551 [2024-07-27 01:35:08.209762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.551 [2024-07-27 01:35:08.209777] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.552 [2024-07-27 01:35:08.209791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.552 [2024-07-27 01:35:08.209825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.552 qpair failed and we were unable to recover it. 
00:27:16.552 [2024-07-27 01:35:08.219592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.552 [2024-07-27 01:35:08.219769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.552 [2024-07-27 01:35:08.219794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.552 [2024-07-27 01:35:08.219810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.552 [2024-07-27 01:35:08.219823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.552 [2024-07-27 01:35:08.219852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.552 qpair failed and we were unable to recover it. 00:27:16.552 [2024-07-27 01:35:08.229650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.552 [2024-07-27 01:35:08.229811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.552 [2024-07-27 01:35:08.229837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.552 [2024-07-27 01:35:08.229864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.552 [2024-07-27 01:35:08.229878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.552 [2024-07-27 01:35:08.229907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.552 qpair failed and we were unable to recover it. 00:27:16.552 [2024-07-27 01:35:08.239591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.552 [2024-07-27 01:35:08.239736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.552 [2024-07-27 01:35:08.239763] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.552 [2024-07-27 01:35:08.239779] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.552 [2024-07-27 01:35:08.239793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.552 [2024-07-27 01:35:08.239821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.552 qpair failed and we were unable to recover it. 
00:27:16.552 [2024-07-27 01:35:08.249716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.552 [2024-07-27 01:35:08.249895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.552 [2024-07-27 01:35:08.249920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.552 [2024-07-27 01:35:08.249936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.552 [2024-07-27 01:35:08.249950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.552 [2024-07-27 01:35:08.249978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.552 qpair failed and we were unable to recover it. 00:27:16.552 [2024-07-27 01:35:08.259697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.552 [2024-07-27 01:35:08.259874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.552 [2024-07-27 01:35:08.259905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.552 [2024-07-27 01:35:08.259921] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.552 [2024-07-27 01:35:08.259936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.552 [2024-07-27 01:35:08.259965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.552 qpair failed and we were unable to recover it. 00:27:16.552 [2024-07-27 01:35:08.269712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.552 [2024-07-27 01:35:08.269869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.552 [2024-07-27 01:35:08.269895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.552 [2024-07-27 01:35:08.269910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.552 [2024-07-27 01:35:08.269924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.552 [2024-07-27 01:35:08.269952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.552 qpair failed and we were unable to recover it. 
00:27:16.552 [2024-07-27 01:35:08.279758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.552 [2024-07-27 01:35:08.279914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.552 [2024-07-27 01:35:08.279940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.552 [2024-07-27 01:35:08.279955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.552 [2024-07-27 01:35:08.279969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.552 [2024-07-27 01:35:08.279999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.552 qpair failed and we were unable to recover it. 00:27:16.552 [2024-07-27 01:35:08.289758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.552 [2024-07-27 01:35:08.289936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.552 [2024-07-27 01:35:08.289962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.552 [2024-07-27 01:35:08.289977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.552 [2024-07-27 01:35:08.289991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.552 [2024-07-27 01:35:08.290019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.552 qpair failed and we were unable to recover it. 00:27:16.552 [2024-07-27 01:35:08.299794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.552 [2024-07-27 01:35:08.299952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.552 [2024-07-27 01:35:08.299977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.552 [2024-07-27 01:35:08.299993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.552 [2024-07-27 01:35:08.300007] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.552 [2024-07-27 01:35:08.300041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.552 qpair failed and we were unable to recover it. 
00:27:16.812 [2024-07-27 01:35:08.309801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.812 [2024-07-27 01:35:08.309944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.812 [2024-07-27 01:35:08.309969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.812 [2024-07-27 01:35:08.309984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.812 [2024-07-27 01:35:08.309997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.812 [2024-07-27 01:35:08.310025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.812 qpair failed and we were unable to recover it. 00:27:16.812 [2024-07-27 01:35:08.319837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.812 [2024-07-27 01:35:08.319985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.812 [2024-07-27 01:35:08.320010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.812 [2024-07-27 01:35:08.320026] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.812 [2024-07-27 01:35:08.320041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.812 [2024-07-27 01:35:08.320077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.812 qpair failed and we were unable to recover it. 00:27:16.812 [2024-07-27 01:35:08.329849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.812 [2024-07-27 01:35:08.329993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.812 [2024-07-27 01:35:08.330018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.812 [2024-07-27 01:35:08.330034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.812 [2024-07-27 01:35:08.330047] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.812 [2024-07-27 01:35:08.330085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.812 qpair failed and we were unable to recover it. 
00:27:16.812 [2024-07-27 01:35:08.339896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.812 [2024-07-27 01:35:08.340043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.812 [2024-07-27 01:35:08.340076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.812 [2024-07-27 01:35:08.340092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.812 [2024-07-27 01:35:08.340106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.812 [2024-07-27 01:35:08.340134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.812 qpair failed and we were unable to recover it. 00:27:16.812 [2024-07-27 01:35:08.349931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.812 [2024-07-27 01:35:08.350136] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.812 [2024-07-27 01:35:08.350167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.812 [2024-07-27 01:35:08.350184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.812 [2024-07-27 01:35:08.350198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.812 [2024-07-27 01:35:08.350226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.812 qpair failed and we were unable to recover it. 00:27:16.812 [2024-07-27 01:35:08.359974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.812 [2024-07-27 01:35:08.360119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.812 [2024-07-27 01:35:08.360144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.812 [2024-07-27 01:35:08.360160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.812 [2024-07-27 01:35:08.360174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.812 [2024-07-27 01:35:08.360202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.812 qpair failed and we were unable to recover it. 
00:27:16.812 [2024-07-27 01:35:08.369980] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.812 [2024-07-27 01:35:08.370124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.812 [2024-07-27 01:35:08.370149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.812 [2024-07-27 01:35:08.370164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.813 [2024-07-27 01:35:08.370177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.813 [2024-07-27 01:35:08.370205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.813 qpair failed and we were unable to recover it. 00:27:16.813 [2024-07-27 01:35:08.380021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.813 [2024-07-27 01:35:08.380175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.813 [2024-07-27 01:35:08.380200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.813 [2024-07-27 01:35:08.380215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.813 [2024-07-27 01:35:08.380229] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.813 [2024-07-27 01:35:08.380259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.813 qpair failed and we were unable to recover it. 00:27:16.813 [2024-07-27 01:35:08.390028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.813 [2024-07-27 01:35:08.390203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.813 [2024-07-27 01:35:08.390228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.813 [2024-07-27 01:35:08.390244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.813 [2024-07-27 01:35:08.390258] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.813 [2024-07-27 01:35:08.390292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.813 qpair failed and we were unable to recover it. 
00:27:16.813 [2024-07-27 01:35:08.400067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.813 [2024-07-27 01:35:08.400224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.813 [2024-07-27 01:35:08.400249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.813 [2024-07-27 01:35:08.400264] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.813 [2024-07-27 01:35:08.400278] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.813 [2024-07-27 01:35:08.400307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.813 qpair failed and we were unable to recover it. 00:27:16.813 [2024-07-27 01:35:08.410097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.813 [2024-07-27 01:35:08.410252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.813 [2024-07-27 01:35:08.410279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.813 [2024-07-27 01:35:08.410299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.813 [2024-07-27 01:35:08.410315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.813 [2024-07-27 01:35:08.410344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.813 qpair failed and we were unable to recover it. 00:27:16.813 [2024-07-27 01:35:08.420149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.813 [2024-07-27 01:35:08.420308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.813 [2024-07-27 01:35:08.420334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.813 [2024-07-27 01:35:08.420350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.813 [2024-07-27 01:35:08.420364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.813 [2024-07-27 01:35:08.420392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.813 qpair failed and we were unable to recover it. 
00:27:16.813 [2024-07-27 01:35:08.430145] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.813 [2024-07-27 01:35:08.430293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.813 [2024-07-27 01:35:08.430317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.813 [2024-07-27 01:35:08.430332] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.813 [2024-07-27 01:35:08.430345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.813 [2024-07-27 01:35:08.430374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.813 qpair failed and we were unable to recover it. 00:27:16.813 [2024-07-27 01:35:08.440174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.813 [2024-07-27 01:35:08.440349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.813 [2024-07-27 01:35:08.440379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.813 [2024-07-27 01:35:08.440395] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.813 [2024-07-27 01:35:08.440409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.813 [2024-07-27 01:35:08.440437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.813 qpair failed and we were unable to recover it. 00:27:16.813 [2024-07-27 01:35:08.450221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.813 [2024-07-27 01:35:08.450369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.813 [2024-07-27 01:35:08.450395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.813 [2024-07-27 01:35:08.450411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.813 [2024-07-27 01:35:08.450425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.813 [2024-07-27 01:35:08.450454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.813 qpair failed and we were unable to recover it. 
00:27:16.813 [2024-07-27 01:35:08.460265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.813 [2024-07-27 01:35:08.460414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.813 [2024-07-27 01:35:08.460440] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.813 [2024-07-27 01:35:08.460456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.813 [2024-07-27 01:35:08.460471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.813 [2024-07-27 01:35:08.460499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.813 qpair failed and we were unable to recover it. 00:27:16.813 [2024-07-27 01:35:08.470353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.813 [2024-07-27 01:35:08.470517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.813 [2024-07-27 01:35:08.470543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.813 [2024-07-27 01:35:08.470561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.813 [2024-07-27 01:35:08.470576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.813 [2024-07-27 01:35:08.470606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.813 qpair failed and we were unable to recover it. 00:27:16.813 [2024-07-27 01:35:08.480291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.813 [2024-07-27 01:35:08.480438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.813 [2024-07-27 01:35:08.480464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.813 [2024-07-27 01:35:08.480479] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.813 [2024-07-27 01:35:08.480493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.813 [2024-07-27 01:35:08.480527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.813 qpair failed and we were unable to recover it. 
00:27:16.813 [2024-07-27 01:35:08.490318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.813 [2024-07-27 01:35:08.490461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.813 [2024-07-27 01:35:08.490486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.814 [2024-07-27 01:35:08.490502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.814 [2024-07-27 01:35:08.490517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.814 [2024-07-27 01:35:08.490545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.814 qpair failed and we were unable to recover it. 00:27:16.814 [2024-07-27 01:35:08.500351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.814 [2024-07-27 01:35:08.500500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.814 [2024-07-27 01:35:08.500524] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.814 [2024-07-27 01:35:08.500540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.814 [2024-07-27 01:35:08.500553] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.814 [2024-07-27 01:35:08.500581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.814 qpair failed and we were unable to recover it. 00:27:16.814 [2024-07-27 01:35:08.510378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.814 [2024-07-27 01:35:08.510526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.814 [2024-07-27 01:35:08.510550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.814 [2024-07-27 01:35:08.510566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.814 [2024-07-27 01:35:08.510580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.814 [2024-07-27 01:35:08.510608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.814 qpair failed and we were unable to recover it. 
00:27:16.814 [2024-07-27 01:35:08.520404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.814 [2024-07-27 01:35:08.520549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.814 [2024-07-27 01:35:08.520574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.814 [2024-07-27 01:35:08.520590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.814 [2024-07-27 01:35:08.520603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.814 [2024-07-27 01:35:08.520631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.814 qpair failed and we were unable to recover it. 00:27:16.814 [2024-07-27 01:35:08.530457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.814 [2024-07-27 01:35:08.530649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.814 [2024-07-27 01:35:08.530680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.814 [2024-07-27 01:35:08.530696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.814 [2024-07-27 01:35:08.530710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.814 [2024-07-27 01:35:08.530738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.814 qpair failed and we were unable to recover it. 00:27:16.814 [2024-07-27 01:35:08.540572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.814 [2024-07-27 01:35:08.540720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.814 [2024-07-27 01:35:08.540745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.814 [2024-07-27 01:35:08.540761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.814 [2024-07-27 01:35:08.540775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.814 [2024-07-27 01:35:08.540803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.814 qpair failed and we were unable to recover it. 
00:27:16.814 [2024-07-27 01:35:08.550529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.814 [2024-07-27 01:35:08.550708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.814 [2024-07-27 01:35:08.550734] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.814 [2024-07-27 01:35:08.550749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.814 [2024-07-27 01:35:08.550763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.814 [2024-07-27 01:35:08.550791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.814 qpair failed and we were unable to recover it. 00:27:16.814 [2024-07-27 01:35:08.560588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.814 [2024-07-27 01:35:08.560734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.814 [2024-07-27 01:35:08.560762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.814 [2024-07-27 01:35:08.560782] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.814 [2024-07-27 01:35:08.560796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:16.814 [2024-07-27 01:35:08.560827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:16.814 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-27 01:35:08.570594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.074 [2024-07-27 01:35:08.570744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.074 [2024-07-27 01:35:08.570770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.074 [2024-07-27 01:35:08.570786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.074 [2024-07-27 01:35:08.570800] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.074 [2024-07-27 01:35:08.570838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.074 qpair failed and we were unable to recover it. 
00:27:17.074 [2024-07-27 01:35:08.580594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.074 [2024-07-27 01:35:08.580744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.074 [2024-07-27 01:35:08.580769] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.074 [2024-07-27 01:35:08.580784] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.074 [2024-07-27 01:35:08.580798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.074 [2024-07-27 01:35:08.580826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-27 01:35:08.590644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.074 [2024-07-27 01:35:08.590789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.074 [2024-07-27 01:35:08.590814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.074 [2024-07-27 01:35:08.590830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.074 [2024-07-27 01:35:08.590844] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.074 [2024-07-27 01:35:08.590872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-27 01:35:08.600717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.074 [2024-07-27 01:35:08.600890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.074 [2024-07-27 01:35:08.600916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.074 [2024-07-27 01:35:08.600931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.074 [2024-07-27 01:35:08.600946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.074 [2024-07-27 01:35:08.600974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.074 qpair failed and we were unable to recover it. 
00:27:17.074 [2024-07-27 01:35:08.610664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.074 [2024-07-27 01:35:08.610803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.074 [2024-07-27 01:35:08.610829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.074 [2024-07-27 01:35:08.610845] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.074 [2024-07-27 01:35:08.610859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.074 [2024-07-27 01:35:08.610887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-27 01:35:08.620733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.074 [2024-07-27 01:35:08.620882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.074 [2024-07-27 01:35:08.620912] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.074 [2024-07-27 01:35:08.620929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.074 [2024-07-27 01:35:08.620943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.074 [2024-07-27 01:35:08.620971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-27 01:35:08.630776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.074 [2024-07-27 01:35:08.630921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.074 [2024-07-27 01:35:08.630947] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.074 [2024-07-27 01:35:08.630962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.074 [2024-07-27 01:35:08.630975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.074 [2024-07-27 01:35:08.631004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.074 qpair failed and we were unable to recover it. 
00:27:17.074 [2024-07-27 01:35:08.640749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.074 [2024-07-27 01:35:08.640936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.074 [2024-07-27 01:35:08.640961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.074 [2024-07-27 01:35:08.640977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.074 [2024-07-27 01:35:08.640990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.074 [2024-07-27 01:35:08.641018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-27 01:35:08.650770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.074 [2024-07-27 01:35:08.650920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.074 [2024-07-27 01:35:08.650945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.074 [2024-07-27 01:35:08.650961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.074 [2024-07-27 01:35:08.650975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.074 [2024-07-27 01:35:08.651004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-27 01:35:08.660811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.074 [2024-07-27 01:35:08.661005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.074 [2024-07-27 01:35:08.661031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.074 [2024-07-27 01:35:08.661047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.074 [2024-07-27 01:35:08.661075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.074 [2024-07-27 01:35:08.661107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.074 qpair failed and we were unable to recover it. 
00:27:17.074 [2024-07-27 01:35:08.670864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.074 [2024-07-27 01:35:08.671026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.075 [2024-07-27 01:35:08.671052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.075 [2024-07-27 01:35:08.671078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.075 [2024-07-27 01:35:08.671093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.075 [2024-07-27 01:35:08.671122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-27 01:35:08.680871] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.075 [2024-07-27 01:35:08.681021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.075 [2024-07-27 01:35:08.681047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.075 [2024-07-27 01:35:08.681074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.075 [2024-07-27 01:35:08.681090] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.075 [2024-07-27 01:35:08.681120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-27 01:35:08.690913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.075 [2024-07-27 01:35:08.691083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.075 [2024-07-27 01:35:08.691109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.075 [2024-07-27 01:35:08.691124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.075 [2024-07-27 01:35:08.691138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.075 [2024-07-27 01:35:08.691166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.075 qpair failed and we were unable to recover it. 
00:27:17.075 [2024-07-27 01:35:08.700951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.075 [2024-07-27 01:35:08.701112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.075 [2024-07-27 01:35:08.701137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.075 [2024-07-27 01:35:08.701153] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.075 [2024-07-27 01:35:08.701167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.075 [2024-07-27 01:35:08.701196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-27 01:35:08.710956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.075 [2024-07-27 01:35:08.711128] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.075 [2024-07-27 01:35:08.711159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.075 [2024-07-27 01:35:08.711175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.075 [2024-07-27 01:35:08.711189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.075 [2024-07-27 01:35:08.711218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-27 01:35:08.720983] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.075 [2024-07-27 01:35:08.721146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.075 [2024-07-27 01:35:08.721172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.075 [2024-07-27 01:35:08.721188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.075 [2024-07-27 01:35:08.721202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.075 [2024-07-27 01:35:08.721231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.075 qpair failed and we were unable to recover it. 
00:27:17.075 [2024-07-27 01:35:08.731014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.075 [2024-07-27 01:35:08.731186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.075 [2024-07-27 01:35:08.731212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.075 [2024-07-27 01:35:08.731227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.075 [2024-07-27 01:35:08.731241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.075 [2024-07-27 01:35:08.731271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-27 01:35:08.741036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.075 [2024-07-27 01:35:08.741219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.075 [2024-07-27 01:35:08.741244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.075 [2024-07-27 01:35:08.741260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.075 [2024-07-27 01:35:08.741274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.075 [2024-07-27 01:35:08.741303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-27 01:35:08.751089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.075 [2024-07-27 01:35:08.751264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.075 [2024-07-27 01:35:08.751289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.075 [2024-07-27 01:35:08.751305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.075 [2024-07-27 01:35:08.751324] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.075 [2024-07-27 01:35:08.751355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.075 qpair failed and we were unable to recover it. 
00:27:17.075 [2024-07-27 01:35:08.761100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.075 [2024-07-27 01:35:08.761281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.075 [2024-07-27 01:35:08.761308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.075 [2024-07-27 01:35:08.761328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.075 [2024-07-27 01:35:08.761342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.075 [2024-07-27 01:35:08.761373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-27 01:35:08.771178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.075 [2024-07-27 01:35:08.771359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.075 [2024-07-27 01:35:08.771384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.075 [2024-07-27 01:35:08.771400] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.075 [2024-07-27 01:35:08.771414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.075 [2024-07-27 01:35:08.771444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-27 01:35:08.781197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.075 [2024-07-27 01:35:08.781400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.075 [2024-07-27 01:35:08.781426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.075 [2024-07-27 01:35:08.781442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.075 [2024-07-27 01:35:08.781456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.075 [2024-07-27 01:35:08.781484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.075 qpair failed and we were unable to recover it. 
00:27:17.075 [2024-07-27 01:35:08.791271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.075 [2024-07-27 01:35:08.791419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.075 [2024-07-27 01:35:08.791444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.075 [2024-07-27 01:35:08.791460] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.075 [2024-07-27 01:35:08.791474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.075 [2024-07-27 01:35:08.791503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-27 01:35:08.801230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.075 [2024-07-27 01:35:08.801382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.075 [2024-07-27 01:35:08.801408] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.076 [2024-07-27 01:35:08.801423] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.076 [2024-07-27 01:35:08.801437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.076 [2024-07-27 01:35:08.801466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-27 01:35:08.811249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.076 [2024-07-27 01:35:08.811389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.076 [2024-07-27 01:35:08.811415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.076 [2024-07-27 01:35:08.811430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.076 [2024-07-27 01:35:08.811443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.076 [2024-07-27 01:35:08.811471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.076 qpair failed and we were unable to recover it. 
00:27:17.076 [2024-07-27 01:35:08.821307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.076 [2024-07-27 01:35:08.821457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.076 [2024-07-27 01:35:08.821482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.076 [2024-07-27 01:35:08.821498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.076 [2024-07-27 01:35:08.821512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.076 [2024-07-27 01:35:08.821540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.336 [2024-07-27 01:35:08.831301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.336 [2024-07-27 01:35:08.831449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.336 [2024-07-27 01:35:08.831475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.336 [2024-07-27 01:35:08.831490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.336 [2024-07-27 01:35:08.831504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.336 [2024-07-27 01:35:08.831532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.337 qpair failed and we were unable to recover it. 00:27:17.337 [2024-07-27 01:35:08.841348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.337 [2024-07-27 01:35:08.841568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.337 [2024-07-27 01:35:08.841593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.337 [2024-07-27 01:35:08.841608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.337 [2024-07-27 01:35:08.841628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.337 [2024-07-27 01:35:08.841657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.337 qpair failed and we were unable to recover it. 
00:27:17.337 [2024-07-27 01:35:08.851382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.337 [2024-07-27 01:35:08.851559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.337 [2024-07-27 01:35:08.851585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.337 [2024-07-27 01:35:08.851600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.337 [2024-07-27 01:35:08.851614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.337 [2024-07-27 01:35:08.851641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.337 qpair failed and we were unable to recover it. 00:27:17.337 [2024-07-27 01:35:08.861439] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.337 [2024-07-27 01:35:08.861605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.337 [2024-07-27 01:35:08.861631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.337 [2024-07-27 01:35:08.861647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.337 [2024-07-27 01:35:08.861661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.337 [2024-07-27 01:35:08.861689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.337 qpair failed and we were unable to recover it. 00:27:17.337 [2024-07-27 01:35:08.871505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.337 [2024-07-27 01:35:08.871655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.337 [2024-07-27 01:35:08.871680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.337 [2024-07-27 01:35:08.871695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.337 [2024-07-27 01:35:08.871709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.337 [2024-07-27 01:35:08.871738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.337 qpair failed and we were unable to recover it. 
00:27:17.337 [2024-07-27 01:35:08.881475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.337 [2024-07-27 01:35:08.881626] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.337 [2024-07-27 01:35:08.881651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.337 [2024-07-27 01:35:08.881667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.337 [2024-07-27 01:35:08.881681] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.337 [2024-07-27 01:35:08.881709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.337 qpair failed and we were unable to recover it. 00:27:17.337 [2024-07-27 01:35:08.891579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.337 [2024-07-27 01:35:08.891751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.337 [2024-07-27 01:35:08.891776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.337 [2024-07-27 01:35:08.891792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.337 [2024-07-27 01:35:08.891806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.337 [2024-07-27 01:35:08.891835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.337 qpair failed and we were unable to recover it. 00:27:17.337 [2024-07-27 01:35:08.901567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.337 [2024-07-27 01:35:08.901724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.337 [2024-07-27 01:35:08.901750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.337 [2024-07-27 01:35:08.901765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.337 [2024-07-27 01:35:08.901779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.337 [2024-07-27 01:35:08.901807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.337 qpair failed and we were unable to recover it. 
00:27:17.337 [2024-07-27 01:35:08.911584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.337 [2024-07-27 01:35:08.911756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.337 [2024-07-27 01:35:08.911782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.337 [2024-07-27 01:35:08.911798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.337 [2024-07-27 01:35:08.911812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.337 [2024-07-27 01:35:08.911841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.337 qpair failed and we were unable to recover it. 00:27:17.337 [2024-07-27 01:35:08.921541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.337 [2024-07-27 01:35:08.921681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.337 [2024-07-27 01:35:08.921706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.337 [2024-07-27 01:35:08.921721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.337 [2024-07-27 01:35:08.921735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.337 [2024-07-27 01:35:08.921765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.337 qpair failed and we were unable to recover it. 00:27:17.337 [2024-07-27 01:35:08.931596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.337 [2024-07-27 01:35:08.931736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.337 [2024-07-27 01:35:08.931761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.337 [2024-07-27 01:35:08.931777] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.337 [2024-07-27 01:35:08.931796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.337 [2024-07-27 01:35:08.931825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.337 qpair failed and we were unable to recover it. 
00:27:17.337 [2024-07-27 01:35:08.941651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.337 [2024-07-27 01:35:08.941803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.337 [2024-07-27 01:35:08.941828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.337 [2024-07-27 01:35:08.941844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.337 [2024-07-27 01:35:08.941858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.337 [2024-07-27 01:35:08.941886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.337 qpair failed and we were unable to recover it. 00:27:17.337 [2024-07-27 01:35:08.951648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.337 [2024-07-27 01:35:08.951798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.337 [2024-07-27 01:35:08.951824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.337 [2024-07-27 01:35:08.951840] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.337 [2024-07-27 01:35:08.951854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.337 [2024-07-27 01:35:08.951882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.337 qpair failed and we were unable to recover it. 00:27:17.337 [2024-07-27 01:35:08.961720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.337 [2024-07-27 01:35:08.961903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.337 [2024-07-27 01:35:08.961928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.337 [2024-07-27 01:35:08.961944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.337 [2024-07-27 01:35:08.961958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.337 [2024-07-27 01:35:08.961987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.337 qpair failed and we were unable to recover it. 
00:27:17.338 [2024-07-27 01:35:08.971712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.338 [2024-07-27 01:35:08.971854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.338 [2024-07-27 01:35:08.971880] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.338 [2024-07-27 01:35:08.971896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.338 [2024-07-27 01:35:08.971910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.338 [2024-07-27 01:35:08.971938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.338 qpair failed and we were unable to recover it. 00:27:17.338 [2024-07-27 01:35:08.981773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.338 [2024-07-27 01:35:08.981972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.338 [2024-07-27 01:35:08.981997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.338 [2024-07-27 01:35:08.982013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.338 [2024-07-27 01:35:08.982027] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.338 [2024-07-27 01:35:08.982055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.338 qpair failed and we were unable to recover it. 00:27:17.338 [2024-07-27 01:35:08.991767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.338 [2024-07-27 01:35:08.991921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.338 [2024-07-27 01:35:08.991946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.338 [2024-07-27 01:35:08.991962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.338 [2024-07-27 01:35:08.991975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.338 [2024-07-27 01:35:08.992004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.338 qpair failed and we were unable to recover it. 
00:27:17.338 [2024-07-27 01:35:09.001897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.338 [2024-07-27 01:35:09.002073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.338 [2024-07-27 01:35:09.002099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.338 [2024-07-27 01:35:09.002115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.338 [2024-07-27 01:35:09.002129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.338 [2024-07-27 01:35:09.002158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.338 qpair failed and we were unable to recover it. 00:27:17.338 [2024-07-27 01:35:09.011823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.338 [2024-07-27 01:35:09.011959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.338 [2024-07-27 01:35:09.011984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.338 [2024-07-27 01:35:09.012000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.338 [2024-07-27 01:35:09.012013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.338 [2024-07-27 01:35:09.012041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.338 qpair failed and we were unable to recover it. 00:27:17.338 [2024-07-27 01:35:09.021910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.338 [2024-07-27 01:35:09.022074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.338 [2024-07-27 01:35:09.022099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.338 [2024-07-27 01:35:09.022115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.338 [2024-07-27 01:35:09.022134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.338 [2024-07-27 01:35:09.022164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.338 qpair failed and we were unable to recover it. 
00:27:17.338 [2024-07-27 01:35:09.031886] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.338 [2024-07-27 01:35:09.032035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.338 [2024-07-27 01:35:09.032066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.338 [2024-07-27 01:35:09.032085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.338 [2024-07-27 01:35:09.032099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.338 [2024-07-27 01:35:09.032128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.338 qpair failed and we were unable to recover it. 00:27:17.338 [2024-07-27 01:35:09.041942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.338 [2024-07-27 01:35:09.042087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.338 [2024-07-27 01:35:09.042113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.338 [2024-07-27 01:35:09.042129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.338 [2024-07-27 01:35:09.042142] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.338 [2024-07-27 01:35:09.042171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.338 qpair failed and we were unable to recover it. 00:27:17.338 [2024-07-27 01:35:09.052010] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.338 [2024-07-27 01:35:09.052227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.338 [2024-07-27 01:35:09.052254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.338 [2024-07-27 01:35:09.052274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.338 [2024-07-27 01:35:09.052289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.338 [2024-07-27 01:35:09.052319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.338 qpair failed and we were unable to recover it. 
00:27:17.338 [2024-07-27 01:35:09.062023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.338 [2024-07-27 01:35:09.062236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.338 [2024-07-27 01:35:09.062262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.338 [2024-07-27 01:35:09.062278] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.338 [2024-07-27 01:35:09.062292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.338 [2024-07-27 01:35:09.062321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.338 qpair failed and we were unable to recover it. 00:27:17.338 [2024-07-27 01:35:09.072011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.338 [2024-07-27 01:35:09.072171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.338 [2024-07-27 01:35:09.072198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.338 [2024-07-27 01:35:09.072214] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.338 [2024-07-27 01:35:09.072227] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.338 [2024-07-27 01:35:09.072256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.338 qpair failed and we were unable to recover it. 00:27:17.338 [2024-07-27 01:35:09.082046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.338 [2024-07-27 01:35:09.082211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.338 [2024-07-27 01:35:09.082237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.338 [2024-07-27 01:35:09.082254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.338 [2024-07-27 01:35:09.082268] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.338 [2024-07-27 01:35:09.082298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.338 qpair failed and we were unable to recover it. 
00:27:17.338 [2024-07-27 01:35:09.092102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.338 [2024-07-27 01:35:09.092251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.338 [2024-07-27 01:35:09.092275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.338 [2024-07-27 01:35:09.092291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.338 [2024-07-27 01:35:09.092305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.338 [2024-07-27 01:35:09.092334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.338 qpair failed and we were unable to recover it. 00:27:17.599 [2024-07-27 01:35:09.102124] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.599 [2024-07-27 01:35:09.102281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.599 [2024-07-27 01:35:09.102307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.599 [2024-07-27 01:35:09.102323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.599 [2024-07-27 01:35:09.102336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.599 [2024-07-27 01:35:09.102364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.599 qpair failed and we were unable to recover it. 00:27:17.599 [2024-07-27 01:35:09.112124] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.599 [2024-07-27 01:35:09.112287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.599 [2024-07-27 01:35:09.112312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.599 [2024-07-27 01:35:09.112327] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.599 [2024-07-27 01:35:09.112347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.599 [2024-07-27 01:35:09.112378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.599 qpair failed and we were unable to recover it. 
00:27:17.599 [2024-07-27 01:35:09.122206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.599 [2024-07-27 01:35:09.122369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.599 [2024-07-27 01:35:09.122395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.599 [2024-07-27 01:35:09.122410] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.599 [2024-07-27 01:35:09.122425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.599 [2024-07-27 01:35:09.122454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.599 qpair failed and we were unable to recover it. 00:27:17.599 [2024-07-27 01:35:09.132182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.599 [2024-07-27 01:35:09.132326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.599 [2024-07-27 01:35:09.132351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.599 [2024-07-27 01:35:09.132367] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.600 [2024-07-27 01:35:09.132380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.600 [2024-07-27 01:35:09.132409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.600 qpair failed and we were unable to recover it. 00:27:17.600 [2024-07-27 01:35:09.142309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.600 [2024-07-27 01:35:09.142457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.600 [2024-07-27 01:35:09.142482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.600 [2024-07-27 01:35:09.142498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.600 [2024-07-27 01:35:09.142511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.600 [2024-07-27 01:35:09.142540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.600 qpair failed and we were unable to recover it. 
00:27:17.600 [2024-07-27 01:35:09.152246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.600 [2024-07-27 01:35:09.152394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.600 [2024-07-27 01:35:09.152419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.600 [2024-07-27 01:35:09.152435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.600 [2024-07-27 01:35:09.152449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.600 [2024-07-27 01:35:09.152477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.600 qpair failed and we were unable to recover it. 00:27:17.600 [2024-07-27 01:35:09.162280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.600 [2024-07-27 01:35:09.162428] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.600 [2024-07-27 01:35:09.162453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.600 [2024-07-27 01:35:09.162469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.600 [2024-07-27 01:35:09.162483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.600 [2024-07-27 01:35:09.162512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.600 qpair failed and we were unable to recover it. 00:27:17.600 [2024-07-27 01:35:09.172340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.600 [2024-07-27 01:35:09.172487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.600 [2024-07-27 01:35:09.172513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.600 [2024-07-27 01:35:09.172528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.600 [2024-07-27 01:35:09.172542] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.600 [2024-07-27 01:35:09.172570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.600 qpair failed and we were unable to recover it. 
00:27:17.600 [2024-07-27 01:35:09.182364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.600 [2024-07-27 01:35:09.182515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.600 [2024-07-27 01:35:09.182540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.600 [2024-07-27 01:35:09.182556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.600 [2024-07-27 01:35:09.182569] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.600 [2024-07-27 01:35:09.182598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.600 qpair failed and we were unable to recover it. 00:27:17.600 [2024-07-27 01:35:09.192370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.600 [2024-07-27 01:35:09.192545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.600 [2024-07-27 01:35:09.192570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.600 [2024-07-27 01:35:09.192586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.600 [2024-07-27 01:35:09.192599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.600 [2024-07-27 01:35:09.192628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.600 qpair failed and we were unable to recover it. 00:27:17.600 [2024-07-27 01:35:09.202503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.600 [2024-07-27 01:35:09.202655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.600 [2024-07-27 01:35:09.202681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.600 [2024-07-27 01:35:09.202702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.600 [2024-07-27 01:35:09.202716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.600 [2024-07-27 01:35:09.202747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.600 qpair failed and we were unable to recover it. 
00:27:17.600 [2024-07-27 01:35:09.212408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.600 [2024-07-27 01:35:09.212554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.600 [2024-07-27 01:35:09.212580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.600 [2024-07-27 01:35:09.212595] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.600 [2024-07-27 01:35:09.212609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.600 [2024-07-27 01:35:09.212637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.600 qpair failed and we were unable to recover it. 00:27:17.600 [2024-07-27 01:35:09.222484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.600 [2024-07-27 01:35:09.222672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.600 [2024-07-27 01:35:09.222698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.600 [2024-07-27 01:35:09.222713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.600 [2024-07-27 01:35:09.222726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.600 [2024-07-27 01:35:09.222755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.600 qpair failed and we were unable to recover it. 00:27:17.600 [2024-07-27 01:35:09.232463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.600 [2024-07-27 01:35:09.232611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.600 [2024-07-27 01:35:09.232637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.600 [2024-07-27 01:35:09.232653] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.600 [2024-07-27 01:35:09.232667] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.600 [2024-07-27 01:35:09.232698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.600 qpair failed and we were unable to recover it. 
00:27:17.600 [2024-07-27 01:35:09.242530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.600 [2024-07-27 01:35:09.242677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.600 [2024-07-27 01:35:09.242703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.600 [2024-07-27 01:35:09.242719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.600 [2024-07-27 01:35:09.242733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.600 [2024-07-27 01:35:09.242762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.600 qpair failed and we were unable to recover it. 00:27:17.600 [2024-07-27 01:35:09.252516] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.600 [2024-07-27 01:35:09.252660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.600 [2024-07-27 01:35:09.252687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.600 [2024-07-27 01:35:09.252702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.600 [2024-07-27 01:35:09.252716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.600 [2024-07-27 01:35:09.252745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.600 qpair failed and we were unable to recover it. 00:27:17.600 [2024-07-27 01:35:09.262558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.600 [2024-07-27 01:35:09.262713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.600 [2024-07-27 01:35:09.262739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.600 [2024-07-27 01:35:09.262755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.600 [2024-07-27 01:35:09.262769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.601 [2024-07-27 01:35:09.262797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.601 qpair failed and we were unable to recover it. 
00:27:17.601 [2024-07-27 01:35:09.272655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.601 [2024-07-27 01:35:09.272819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.601 [2024-07-27 01:35:09.272845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.601 [2024-07-27 01:35:09.272860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.601 [2024-07-27 01:35:09.272874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.601 [2024-07-27 01:35:09.272902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.601 qpair failed and we were unable to recover it. 00:27:17.601 [2024-07-27 01:35:09.282647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.601 [2024-07-27 01:35:09.282837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.601 [2024-07-27 01:35:09.282862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.601 [2024-07-27 01:35:09.282878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.601 [2024-07-27 01:35:09.282893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.601 [2024-07-27 01:35:09.282922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.601 qpair failed and we were unable to recover it. 00:27:17.601 [2024-07-27 01:35:09.292633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.601 [2024-07-27 01:35:09.292775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.601 [2024-07-27 01:35:09.292800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.601 [2024-07-27 01:35:09.292822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.601 [2024-07-27 01:35:09.292837] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.601 [2024-07-27 01:35:09.292867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.601 qpair failed and we were unable to recover it. 
00:27:17.601 [2024-07-27 01:35:09.302694] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.601 [2024-07-27 01:35:09.302879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.601 [2024-07-27 01:35:09.302905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.601 [2024-07-27 01:35:09.302920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.601 [2024-07-27 01:35:09.302935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.601 [2024-07-27 01:35:09.302963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.601 qpair failed and we were unable to recover it. 00:27:17.601 [2024-07-27 01:35:09.312686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.601 [2024-07-27 01:35:09.312827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.601 [2024-07-27 01:35:09.312852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.601 [2024-07-27 01:35:09.312868] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.601 [2024-07-27 01:35:09.312881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.601 [2024-07-27 01:35:09.312909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.601 qpair failed and we were unable to recover it. 00:27:17.601 [2024-07-27 01:35:09.322700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.601 [2024-07-27 01:35:09.322860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.601 [2024-07-27 01:35:09.322886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.601 [2024-07-27 01:35:09.322901] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.601 [2024-07-27 01:35:09.322915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.601 [2024-07-27 01:35:09.322944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.601 qpair failed and we were unable to recover it. 
00:27:17.601 [2024-07-27 01:35:09.332832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.601 [2024-07-27 01:35:09.333002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.601 [2024-07-27 01:35:09.333027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.601 [2024-07-27 01:35:09.333043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.601 [2024-07-27 01:35:09.333056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.601 [2024-07-27 01:35:09.333098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.601 qpair failed and we were unable to recover it. 00:27:17.601 [2024-07-27 01:35:09.342805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.601 [2024-07-27 01:35:09.342964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.601 [2024-07-27 01:35:09.342990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.601 [2024-07-27 01:35:09.343005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.601 [2024-07-27 01:35:09.343019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.601 [2024-07-27 01:35:09.343048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.601 qpair failed and we were unable to recover it. 00:27:17.601 [2024-07-27 01:35:09.352882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.601 [2024-07-27 01:35:09.353035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.601 [2024-07-27 01:35:09.353065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.601 [2024-07-27 01:35:09.353083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.601 [2024-07-27 01:35:09.353098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.601 [2024-07-27 01:35:09.353126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.601 qpair failed and we were unable to recover it. 
00:27:17.862 [2024-07-27 01:35:09.362846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.862 [2024-07-27 01:35:09.363010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.862 [2024-07-27 01:35:09.363035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.862 [2024-07-27 01:35:09.363050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.862 [2024-07-27 01:35:09.363071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.862 [2024-07-27 01:35:09.363102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.862 qpair failed and we were unable to recover it. 00:27:17.862 [2024-07-27 01:35:09.372947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.862 [2024-07-27 01:35:09.373145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.862 [2024-07-27 01:35:09.373173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.862 [2024-07-27 01:35:09.373191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.863 [2024-07-27 01:35:09.373207] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.863 [2024-07-27 01:35:09.373237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.863 qpair failed and we were unable to recover it. 00:27:17.863 [2024-07-27 01:35:09.382881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.863 [2024-07-27 01:35:09.383032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.863 [2024-07-27 01:35:09.383065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.863 [2024-07-27 01:35:09.383090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.863 [2024-07-27 01:35:09.383105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.863 [2024-07-27 01:35:09.383134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.863 qpair failed and we were unable to recover it. 
00:27:17.863 [2024-07-27 01:35:09.392905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.863 [2024-07-27 01:35:09.393052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.863 [2024-07-27 01:35:09.393084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.863 [2024-07-27 01:35:09.393100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.863 [2024-07-27 01:35:09.393113] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.863 [2024-07-27 01:35:09.393143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.863 qpair failed and we were unable to recover it. 00:27:17.863 [2024-07-27 01:35:09.402923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.863 [2024-07-27 01:35:09.403069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.863 [2024-07-27 01:35:09.403095] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.863 [2024-07-27 01:35:09.403111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.863 [2024-07-27 01:35:09.403124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.863 [2024-07-27 01:35:09.403153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.863 qpair failed and we were unable to recover it. 00:27:17.863 [2024-07-27 01:35:09.412944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.863 [2024-07-27 01:35:09.413097] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.863 [2024-07-27 01:35:09.413122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.863 [2024-07-27 01:35:09.413138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.863 [2024-07-27 01:35:09.413151] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.863 [2024-07-27 01:35:09.413179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.863 qpair failed and we were unable to recover it. 
00:27:17.863 [2024-07-27 01:35:09.423116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.863 [2024-07-27 01:35:09.423269] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.863 [2024-07-27 01:35:09.423293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.863 [2024-07-27 01:35:09.423309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.863 [2024-07-27 01:35:09.423323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.863 [2024-07-27 01:35:09.423351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.863 qpair failed and we were unable to recover it. 00:27:17.863 [2024-07-27 01:35:09.433025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.863 [2024-07-27 01:35:09.433183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.863 [2024-07-27 01:35:09.433207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.863 [2024-07-27 01:35:09.433222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.863 [2024-07-27 01:35:09.433235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.863 [2024-07-27 01:35:09.433263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.863 qpair failed and we were unable to recover it. 00:27:17.863 [2024-07-27 01:35:09.443113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.863 [2024-07-27 01:35:09.443255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.863 [2024-07-27 01:35:09.443281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.863 [2024-07-27 01:35:09.443296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.863 [2024-07-27 01:35:09.443310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.863 [2024-07-27 01:35:09.443339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.863 qpair failed and we were unable to recover it. 
00:27:17.863 [2024-07-27 01:35:09.453150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.863 [2024-07-27 01:35:09.453298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.863 [2024-07-27 01:35:09.453324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.863 [2024-07-27 01:35:09.453339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.863 [2024-07-27 01:35:09.453353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.863 [2024-07-27 01:35:09.453382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.863 qpair failed and we were unable to recover it. 00:27:17.863 [2024-07-27 01:35:09.463115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.863 [2024-07-27 01:35:09.463272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.863 [2024-07-27 01:35:09.463297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.863 [2024-07-27 01:35:09.463313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.863 [2024-07-27 01:35:09.463327] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.863 [2024-07-27 01:35:09.463355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.863 qpair failed and we were unable to recover it. 00:27:17.863 [2024-07-27 01:35:09.473229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.863 [2024-07-27 01:35:09.473399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.863 [2024-07-27 01:35:09.473424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.863 [2024-07-27 01:35:09.473446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.863 [2024-07-27 01:35:09.473461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.863 [2024-07-27 01:35:09.473490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.863 qpair failed and we were unable to recover it. 
00:27:17.863 [2024-07-27 01:35:09.483212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.863 [2024-07-27 01:35:09.483387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.863 [2024-07-27 01:35:09.483413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.863 [2024-07-27 01:35:09.483429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.863 [2024-07-27 01:35:09.483442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.863 [2024-07-27 01:35:09.483471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.863 qpair failed and we were unable to recover it. 00:27:17.863 [2024-07-27 01:35:09.493243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.863 [2024-07-27 01:35:09.493390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.863 [2024-07-27 01:35:09.493417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.863 [2024-07-27 01:35:09.493432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.863 [2024-07-27 01:35:09.493446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.863 [2024-07-27 01:35:09.493475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.863 qpair failed and we were unable to recover it. 00:27:17.863 [2024-07-27 01:35:09.503240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.863 [2024-07-27 01:35:09.503387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.863 [2024-07-27 01:35:09.503412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.863 [2024-07-27 01:35:09.503428] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.863 [2024-07-27 01:35:09.503442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.863 [2024-07-27 01:35:09.503472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.864 qpair failed and we were unable to recover it. 
00:27:17.864 [2024-07-27 01:35:09.513350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.864 [2024-07-27 01:35:09.513500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.864 [2024-07-27 01:35:09.513525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.864 [2024-07-27 01:35:09.513541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.864 [2024-07-27 01:35:09.513555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.864 [2024-07-27 01:35:09.513583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.864 qpair failed and we were unable to recover it. 00:27:17.864 [2024-07-27 01:35:09.523274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.864 [2024-07-27 01:35:09.523425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.864 [2024-07-27 01:35:09.523451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.864 [2024-07-27 01:35:09.523466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.864 [2024-07-27 01:35:09.523479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.864 [2024-07-27 01:35:09.523508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.864 qpair failed and we were unable to recover it. 00:27:17.864 [2024-07-27 01:35:09.533305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.864 [2024-07-27 01:35:09.533451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.864 [2024-07-27 01:35:09.533477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.864 [2024-07-27 01:35:09.533492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.864 [2024-07-27 01:35:09.533506] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.864 [2024-07-27 01:35:09.533534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.864 qpair failed and we were unable to recover it. 
00:27:17.864 [2024-07-27 01:35:09.543382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.864 [2024-07-27 01:35:09.543542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.864 [2024-07-27 01:35:09.543569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.864 [2024-07-27 01:35:09.543588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.864 [2024-07-27 01:35:09.543602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.864 [2024-07-27 01:35:09.543633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.864 qpair failed and we were unable to recover it. 00:27:17.864 [2024-07-27 01:35:09.553359] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.864 [2024-07-27 01:35:09.553506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.864 [2024-07-27 01:35:09.553531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.864 [2024-07-27 01:35:09.553547] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.864 [2024-07-27 01:35:09.553561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.864 [2024-07-27 01:35:09.553589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.864 qpair failed and we were unable to recover it. 00:27:17.864 [2024-07-27 01:35:09.563392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.864 [2024-07-27 01:35:09.563575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.864 [2024-07-27 01:35:09.563603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.864 [2024-07-27 01:35:09.563626] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.864 [2024-07-27 01:35:09.563640] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.864 [2024-07-27 01:35:09.563671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.864 qpair failed and we were unable to recover it. 
00:27:17.864 [2024-07-27 01:35:09.573435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.864 [2024-07-27 01:35:09.573580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.864 [2024-07-27 01:35:09.573606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.864 [2024-07-27 01:35:09.573621] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.864 [2024-07-27 01:35:09.573635] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.864 [2024-07-27 01:35:09.573663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.864 qpair failed and we were unable to recover it. 00:27:17.864 [2024-07-27 01:35:09.583455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.864 [2024-07-27 01:35:09.583652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.864 [2024-07-27 01:35:09.583677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.864 [2024-07-27 01:35:09.583693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.864 [2024-07-27 01:35:09.583707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.864 [2024-07-27 01:35:09.583736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.864 qpair failed and we were unable to recover it. 00:27:17.864 [2024-07-27 01:35:09.593503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.864 [2024-07-27 01:35:09.593682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.864 [2024-07-27 01:35:09.593707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.864 [2024-07-27 01:35:09.593723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.864 [2024-07-27 01:35:09.593737] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.864 [2024-07-27 01:35:09.593764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.864 qpair failed and we were unable to recover it. 
00:27:17.864 [2024-07-27 01:35:09.603605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.864 [2024-07-27 01:35:09.603747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.864 [2024-07-27 01:35:09.603772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.864 [2024-07-27 01:35:09.603787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.864 [2024-07-27 01:35:09.603801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.864 [2024-07-27 01:35:09.603830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.864 qpair failed and we were unable to recover it. 00:27:17.864 [2024-07-27 01:35:09.613593] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.864 [2024-07-27 01:35:09.613771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.864 [2024-07-27 01:35:09.613796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.864 [2024-07-27 01:35:09.613811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.864 [2024-07-27 01:35:09.613825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:17.864 [2024-07-27 01:35:09.613854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.864 qpair failed and we were unable to recover it. 00:27:18.124 [2024-07-27 01:35:09.623583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.124 [2024-07-27 01:35:09.623754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.124 [2024-07-27 01:35:09.623781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.124 [2024-07-27 01:35:09.623796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.124 [2024-07-27 01:35:09.623810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.124 [2024-07-27 01:35:09.623839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.124 qpair failed and we were unable to recover it. 
00:27:18.124 [2024-07-27 01:35:09.633600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.124 [2024-07-27 01:35:09.633752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.124 [2024-07-27 01:35:09.633778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.124 [2024-07-27 01:35:09.633794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.124 [2024-07-27 01:35:09.633808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.124 [2024-07-27 01:35:09.633838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.124 qpair failed and we were unable to recover it. 00:27:18.124 [2024-07-27 01:35:09.643792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.124 [2024-07-27 01:35:09.643972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.124 [2024-07-27 01:35:09.643997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.124 [2024-07-27 01:35:09.644013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.124 [2024-07-27 01:35:09.644027] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.124 [2024-07-27 01:35:09.644055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.124 qpair failed and we were unable to recover it. 00:27:18.124 [2024-07-27 01:35:09.653690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.124 [2024-07-27 01:35:09.653840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.124 [2024-07-27 01:35:09.653871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.124 [2024-07-27 01:35:09.653888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.124 [2024-07-27 01:35:09.653902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.124 [2024-07-27 01:35:09.653931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.124 qpair failed and we were unable to recover it. 
00:27:18.124 [2024-07-27 01:35:09.663733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.124 [2024-07-27 01:35:09.663880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.124 [2024-07-27 01:35:09.663905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.124 [2024-07-27 01:35:09.663923] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.124 [2024-07-27 01:35:09.663938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.124 [2024-07-27 01:35:09.663968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.124 qpair failed and we were unable to recover it. 00:27:18.124 [2024-07-27 01:35:09.673778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.124 [2024-07-27 01:35:09.673931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.124 [2024-07-27 01:35:09.673957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.124 [2024-07-27 01:35:09.673973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.124 [2024-07-27 01:35:09.673987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.124 [2024-07-27 01:35:09.674015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.124 qpair failed and we were unable to recover it. 00:27:18.124 [2024-07-27 01:35:09.683847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.124 [2024-07-27 01:35:09.683993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.124 [2024-07-27 01:35:09.684018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.124 [2024-07-27 01:35:09.684034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.124 [2024-07-27 01:35:09.684047] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.124 [2024-07-27 01:35:09.684083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.124 qpair failed and we were unable to recover it. 
00:27:18.124 [2024-07-27 01:35:09.693773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.125 [2024-07-27 01:35:09.693947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.125 [2024-07-27 01:35:09.693974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.125 [2024-07-27 01:35:09.693993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.125 [2024-07-27 01:35:09.694008] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.125 [2024-07-27 01:35:09.694039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.125 qpair failed and we were unable to recover it. 00:27:18.125 [2024-07-27 01:35:09.703816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.125 [2024-07-27 01:35:09.703969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.125 [2024-07-27 01:35:09.703995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.125 [2024-07-27 01:35:09.704012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.125 [2024-07-27 01:35:09.704026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.125 [2024-07-27 01:35:09.704056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.125 qpair failed and we were unable to recover it. 00:27:18.125 [2024-07-27 01:35:09.713793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.125 [2024-07-27 01:35:09.713937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.125 [2024-07-27 01:35:09.713963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.125 [2024-07-27 01:35:09.713978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.125 [2024-07-27 01:35:09.713992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.125 [2024-07-27 01:35:09.714021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.125 qpair failed and we were unable to recover it. 
00:27:18.125 [2024-07-27 01:35:09.723830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.125 [2024-07-27 01:35:09.723999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.125 [2024-07-27 01:35:09.724025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.125 [2024-07-27 01:35:09.724040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.125 [2024-07-27 01:35:09.724054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.125 [2024-07-27 01:35:09.724091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.125 qpair failed and we were unable to recover it. 00:27:18.125 [2024-07-27 01:35:09.733883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.125 [2024-07-27 01:35:09.734041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.125 [2024-07-27 01:35:09.734073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.125 [2024-07-27 01:35:09.734090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.125 [2024-07-27 01:35:09.734104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.125 [2024-07-27 01:35:09.734133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.125 qpair failed and we were unable to recover it. 00:27:18.125 [2024-07-27 01:35:09.743921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.125 [2024-07-27 01:35:09.744077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.125 [2024-07-27 01:35:09.744107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.125 [2024-07-27 01:35:09.744124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.125 [2024-07-27 01:35:09.744138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.125 [2024-07-27 01:35:09.744166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.125 qpair failed and we were unable to recover it. 
00:27:18.125 [2024-07-27 01:35:09.753913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.125 [2024-07-27 01:35:09.754073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.125 [2024-07-27 01:35:09.754098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.125 [2024-07-27 01:35:09.754113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.125 [2024-07-27 01:35:09.754127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.125 [2024-07-27 01:35:09.754157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.125 qpair failed and we were unable to recover it. 00:27:18.125 [2024-07-27 01:35:09.763955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.125 [2024-07-27 01:35:09.764102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.125 [2024-07-27 01:35:09.764127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.125 [2024-07-27 01:35:09.764143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.125 [2024-07-27 01:35:09.764157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.125 [2024-07-27 01:35:09.764185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.125 qpair failed and we were unable to recover it. 00:27:18.125 [2024-07-27 01:35:09.773984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.125 [2024-07-27 01:35:09.774155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.125 [2024-07-27 01:35:09.774180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.125 [2024-07-27 01:35:09.774195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.125 [2024-07-27 01:35:09.774209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.125 [2024-07-27 01:35:09.774237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.125 qpair failed and we were unable to recover it. 
00:27:18.125 [2024-07-27 01:35:09.784001] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.125 [2024-07-27 01:35:09.784148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.125 [2024-07-27 01:35:09.784174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.125 [2024-07-27 01:35:09.784189] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.125 [2024-07-27 01:35:09.784204] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.125 [2024-07-27 01:35:09.784232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.125 qpair failed and we were unable to recover it. 00:27:18.125 [2024-07-27 01:35:09.794043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.125 [2024-07-27 01:35:09.794205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.125 [2024-07-27 01:35:09.794230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.125 [2024-07-27 01:35:09.794246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.125 [2024-07-27 01:35:09.794260] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.125 [2024-07-27 01:35:09.794291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.125 qpair failed and we were unable to recover it. 00:27:18.125 [2024-07-27 01:35:09.804069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.125 [2024-07-27 01:35:09.804219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.125 [2024-07-27 01:35:09.804245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.125 [2024-07-27 01:35:09.804261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.125 [2024-07-27 01:35:09.804276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.125 [2024-07-27 01:35:09.804304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.125 qpair failed and we were unable to recover it. 
00:27:18.125 [2024-07-27 01:35:09.814132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.125 [2024-07-27 01:35:09.814277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.125 [2024-07-27 01:35:09.814303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.126 [2024-07-27 01:35:09.814319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.126 [2024-07-27 01:35:09.814334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.126 [2024-07-27 01:35:09.814364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.126 qpair failed and we were unable to recover it. 00:27:18.126 [2024-07-27 01:35:09.824223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.126 [2024-07-27 01:35:09.824373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.126 [2024-07-27 01:35:09.824398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.126 [2024-07-27 01:35:09.824414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.126 [2024-07-27 01:35:09.824427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.126 [2024-07-27 01:35:09.824456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.126 qpair failed and we were unable to recover it. 00:27:18.126 [2024-07-27 01:35:09.834168] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.126 [2024-07-27 01:35:09.834350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.126 [2024-07-27 01:35:09.834382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.126 [2024-07-27 01:35:09.834401] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.126 [2024-07-27 01:35:09.834417] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.126 [2024-07-27 01:35:09.834447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.126 qpair failed and we were unable to recover it. 
00:27:18.126 [2024-07-27 01:35:09.844187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.126 [2024-07-27 01:35:09.844344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.126 [2024-07-27 01:35:09.844369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.126 [2024-07-27 01:35:09.844385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.126 [2024-07-27 01:35:09.844399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.126 [2024-07-27 01:35:09.844427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.126 qpair failed and we were unable to recover it. 00:27:18.126 [2024-07-27 01:35:09.854232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.126 [2024-07-27 01:35:09.854383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.126 [2024-07-27 01:35:09.854408] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.126 [2024-07-27 01:35:09.854425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.126 [2024-07-27 01:35:09.854439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.126 [2024-07-27 01:35:09.854468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.126 qpair failed and we were unable to recover it. 00:27:18.126 [2024-07-27 01:35:09.864275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.126 [2024-07-27 01:35:09.864471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.126 [2024-07-27 01:35:09.864496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.126 [2024-07-27 01:35:09.864511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.126 [2024-07-27 01:35:09.864526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.126 [2024-07-27 01:35:09.864554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.126 qpair failed and we were unable to recover it. 
00:27:18.126 [2024-07-27 01:35:09.874292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.126 [2024-07-27 01:35:09.874462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.126 [2024-07-27 01:35:09.874489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.126 [2024-07-27 01:35:09.874506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.126 [2024-07-27 01:35:09.874524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.126 [2024-07-27 01:35:09.874560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.126 qpair failed and we were unable to recover it. 00:27:18.385 [2024-07-27 01:35:09.884322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.385 [2024-07-27 01:35:09.884471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.385 [2024-07-27 01:35:09.884497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.385 [2024-07-27 01:35:09.884512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.385 [2024-07-27 01:35:09.884527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.385 [2024-07-27 01:35:09.884555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-07-27 01:35:09.894310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.385 [2024-07-27 01:35:09.894456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.385 [2024-07-27 01:35:09.894481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.385 [2024-07-27 01:35:09.894497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.385 [2024-07-27 01:35:09.894511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.385 [2024-07-27 01:35:09.894541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.385 qpair failed and we were unable to recover it. 
00:27:18.385 [2024-07-27 01:35:09.904396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.385 [2024-07-27 01:35:09.904542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.385 [2024-07-27 01:35:09.904568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.385 [2024-07-27 01:35:09.904586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.385 [2024-07-27 01:35:09.904600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.385 [2024-07-27 01:35:09.904630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.385 [2024-07-27 01:35:09.914379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.385 [2024-07-27 01:35:09.914526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.385 [2024-07-27 01:35:09.914551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.385 [2024-07-27 01:35:09.914567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.385 [2024-07-27 01:35:09.914581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.385 [2024-07-27 01:35:09.914610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.385 qpair failed and we were unable to recover it. 00:27:18.386 [2024-07-27 01:35:09.924421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.386 [2024-07-27 01:35:09.924587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.386 [2024-07-27 01:35:09.924618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.386 [2024-07-27 01:35:09.924634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.386 [2024-07-27 01:35:09.924648] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.386 [2024-07-27 01:35:09.924677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.386 qpair failed and we were unable to recover it. 
00:27:18.386 [2024-07-27 01:35:09.934561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.386 [2024-07-27 01:35:09.934723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.386 [2024-07-27 01:35:09.934749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.386 [2024-07-27 01:35:09.934765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.386 [2024-07-27 01:35:09.934779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.386 [2024-07-27 01:35:09.934808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-07-27 01:35:09.944471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.386 [2024-07-27 01:35:09.944633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.386 [2024-07-27 01:35:09.944657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.386 [2024-07-27 01:35:09.944673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.386 [2024-07-27 01:35:09.944687] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.386 [2024-07-27 01:35:09.944715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-07-27 01:35:09.954505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.386 [2024-07-27 01:35:09.954652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.386 [2024-07-27 01:35:09.954677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.386 [2024-07-27 01:35:09.954693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.386 [2024-07-27 01:35:09.954707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.386 [2024-07-27 01:35:09.954737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.386 qpair failed and we were unable to recover it. 
00:27:18.386 [2024-07-27 01:35:09.964560] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.386 [2024-07-27 01:35:09.964706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.386 [2024-07-27 01:35:09.964731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.386 [2024-07-27 01:35:09.964747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.386 [2024-07-27 01:35:09.964762] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.386 [2024-07-27 01:35:09.964795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-07-27 01:35:09.974605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.386 [2024-07-27 01:35:09.974797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.386 [2024-07-27 01:35:09.974823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.386 [2024-07-27 01:35:09.974838] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.386 [2024-07-27 01:35:09.974851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.386 [2024-07-27 01:35:09.974879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-07-27 01:35:09.984641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.386 [2024-07-27 01:35:09.984791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.386 [2024-07-27 01:35:09.984817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.386 [2024-07-27 01:35:09.984834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.386 [2024-07-27 01:35:09.984848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.386 [2024-07-27 01:35:09.984878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.386 qpair failed and we were unable to recover it. 
00:27:18.386 [2024-07-27 01:35:09.994620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.386 [2024-07-27 01:35:09.994770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.386 [2024-07-27 01:35:09.994795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.386 [2024-07-27 01:35:09.994811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.386 [2024-07-27 01:35:09.994825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.386 [2024-07-27 01:35:09.994852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-07-27 01:35:10.004733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.386 [2024-07-27 01:35:10.004905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.386 [2024-07-27 01:35:10.004932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.386 [2024-07-27 01:35:10.004950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.386 [2024-07-27 01:35:10.004964] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.386 [2024-07-27 01:35:10.004993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-07-27 01:35:10.014718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.386 [2024-07-27 01:35:10.014884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.386 [2024-07-27 01:35:10.014919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.386 [2024-07-27 01:35:10.014938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.386 [2024-07-27 01:35:10.014953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.386 [2024-07-27 01:35:10.014983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.386 qpair failed and we were unable to recover it. 
00:27:18.386 [2024-07-27 01:35:10.024757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.386 [2024-07-27 01:35:10.024908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.386 [2024-07-27 01:35:10.024934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.386 [2024-07-27 01:35:10.024950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.386 [2024-07-27 01:35:10.024964] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.386 [2024-07-27 01:35:10.024993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-07-27 01:35:10.034893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.386 [2024-07-27 01:35:10.035074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.386 [2024-07-27 01:35:10.035100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.386 [2024-07-27 01:35:10.035116] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.386 [2024-07-27 01:35:10.035130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.386 [2024-07-27 01:35:10.035159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.386 qpair failed and we were unable to recover it. 00:27:18.386 [2024-07-27 01:35:10.044799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.386 [2024-07-27 01:35:10.044960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.386 [2024-07-27 01:35:10.044986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.386 [2024-07-27 01:35:10.045002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.386 [2024-07-27 01:35:10.045016] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.386 [2024-07-27 01:35:10.045044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.386 qpair failed and we were unable to recover it. 
00:27:18.386 [2024-07-27 01:35:10.054846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.386 [2024-07-27 01:35:10.054989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.386 [2024-07-27 01:35:10.055015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.387 [2024-07-27 01:35:10.055030] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.387 [2024-07-27 01:35:10.055044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.387 [2024-07-27 01:35:10.055090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-07-27 01:35:10.064862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.387 [2024-07-27 01:35:10.065016] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.387 [2024-07-27 01:35:10.065041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.387 [2024-07-27 01:35:10.065056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.387 [2024-07-27 01:35:10.065077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.387 [2024-07-27 01:35:10.065107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-07-27 01:35:10.074888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.387 [2024-07-27 01:35:10.075091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.387 [2024-07-27 01:35:10.075116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.387 [2024-07-27 01:35:10.075132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.387 [2024-07-27 01:35:10.075146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.387 [2024-07-27 01:35:10.075175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.387 qpair failed and we were unable to recover it. 
00:27:18.387 [2024-07-27 01:35:10.084898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.387 [2024-07-27 01:35:10.085050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.387 [2024-07-27 01:35:10.085083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.387 [2024-07-27 01:35:10.085099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.387 [2024-07-27 01:35:10.085112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.387 [2024-07-27 01:35:10.085141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-07-27 01:35:10.094916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.387 [2024-07-27 01:35:10.095068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.387 [2024-07-27 01:35:10.095094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.387 [2024-07-27 01:35:10.095109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.387 [2024-07-27 01:35:10.095123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.387 [2024-07-27 01:35:10.095153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-07-27 01:35:10.104969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.387 [2024-07-27 01:35:10.105125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.387 [2024-07-27 01:35:10.105160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.387 [2024-07-27 01:35:10.105176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.387 [2024-07-27 01:35:10.105190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.387 [2024-07-27 01:35:10.105219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.387 qpair failed and we were unable to recover it. 
00:27:18.387 [2024-07-27 01:35:10.114997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.387 [2024-07-27 01:35:10.115176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.387 [2024-07-27 01:35:10.115201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.387 [2024-07-27 01:35:10.115217] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.387 [2024-07-27 01:35:10.115232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.387 [2024-07-27 01:35:10.115261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-07-27 01:35:10.125017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.387 [2024-07-27 01:35:10.125199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.387 [2024-07-27 01:35:10.125225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.387 [2024-07-27 01:35:10.125240] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.387 [2024-07-27 01:35:10.125254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.387 [2024-07-27 01:35:10.125283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.387 qpair failed and we were unable to recover it. 00:27:18.387 [2024-07-27 01:35:10.135030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.387 [2024-07-27 01:35:10.135230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.387 [2024-07-27 01:35:10.135256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.387 [2024-07-27 01:35:10.135272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.387 [2024-07-27 01:35:10.135285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.387 [2024-07-27 01:35:10.135314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.387 qpair failed and we were unable to recover it. 
00:27:18.646 [2024-07-27 01:35:10.145169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.646 [2024-07-27 01:35:10.145334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.646 [2024-07-27 01:35:10.145360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.646 [2024-07-27 01:35:10.145375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.646 [2024-07-27 01:35:10.145390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.646 [2024-07-27 01:35:10.145424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.646 qpair failed and we were unable to recover it. 00:27:18.646 [2024-07-27 01:35:10.155198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.646 [2024-07-27 01:35:10.155364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.646 [2024-07-27 01:35:10.155390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.646 [2024-07-27 01:35:10.155405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.646 [2024-07-27 01:35:10.155419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.646 [2024-07-27 01:35:10.155449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.646 qpair failed and we were unable to recover it. 00:27:18.646 [2024-07-27 01:35:10.165151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.646 [2024-07-27 01:35:10.165301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.646 [2024-07-27 01:35:10.165327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.646 [2024-07-27 01:35:10.165342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.646 [2024-07-27 01:35:10.165356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.646 [2024-07-27 01:35:10.165385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.646 qpair failed and we were unable to recover it. 
00:27:18.646 [2024-07-27 01:35:10.175182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.646 [2024-07-27 01:35:10.175350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.646 [2024-07-27 01:35:10.175375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.646 [2024-07-27 01:35:10.175391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.646 [2024-07-27 01:35:10.175405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.646 [2024-07-27 01:35:10.175433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.646 qpair failed and we were unable to recover it. 00:27:18.646 [2024-07-27 01:35:10.185243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.646 [2024-07-27 01:35:10.185412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.646 [2024-07-27 01:35:10.185438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.646 [2024-07-27 01:35:10.185455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.646 [2024-07-27 01:35:10.185469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.646 [2024-07-27 01:35:10.185497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.646 qpair failed and we were unable to recover it. 00:27:18.646 [2024-07-27 01:35:10.195280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.646 [2024-07-27 01:35:10.195452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.646 [2024-07-27 01:35:10.195482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.646 [2024-07-27 01:35:10.195498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.646 [2024-07-27 01:35:10.195512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.646 [2024-07-27 01:35:10.195540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.646 qpair failed and we were unable to recover it. 
00:27:18.646 [2024-07-27 01:35:10.205252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.646 [2024-07-27 01:35:10.205405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.646 [2024-07-27 01:35:10.205430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.646 [2024-07-27 01:35:10.205446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.646 [2024-07-27 01:35:10.205460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.646 [2024-07-27 01:35:10.205489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.646 qpair failed and we were unable to recover it. 00:27:18.646 [2024-07-27 01:35:10.215295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.646 [2024-07-27 01:35:10.215445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.646 [2024-07-27 01:35:10.215471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.646 [2024-07-27 01:35:10.215487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.646 [2024-07-27 01:35:10.215500] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.646 [2024-07-27 01:35:10.215529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.647 qpair failed and we were unable to recover it. 00:27:18.647 [2024-07-27 01:35:10.225327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.647 [2024-07-27 01:35:10.225499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.647 [2024-07-27 01:35:10.225524] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.647 [2024-07-27 01:35:10.225540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.647 [2024-07-27 01:35:10.225554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.647 [2024-07-27 01:35:10.225582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.647 qpair failed and we were unable to recover it. 
00:27:18.647 [2024-07-27 01:35:10.235352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.647 [2024-07-27 01:35:10.235501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.647 [2024-07-27 01:35:10.235526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.647 [2024-07-27 01:35:10.235542] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.647 [2024-07-27 01:35:10.235556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.647 [2024-07-27 01:35:10.235590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.647 qpair failed and we were unable to recover it. 00:27:18.647 [2024-07-27 01:35:10.245405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.647 [2024-07-27 01:35:10.245550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.647 [2024-07-27 01:35:10.245576] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.647 [2024-07-27 01:35:10.245591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.647 [2024-07-27 01:35:10.245605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.647 [2024-07-27 01:35:10.245633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.647 qpair failed and we were unable to recover it. 00:27:18.647 [2024-07-27 01:35:10.255393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.647 [2024-07-27 01:35:10.255546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.647 [2024-07-27 01:35:10.255570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.647 [2024-07-27 01:35:10.255586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.647 [2024-07-27 01:35:10.255600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.647 [2024-07-27 01:35:10.255628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.647 qpair failed and we were unable to recover it. 
00:27:18.647 [2024-07-27 01:35:10.265436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.647 [2024-07-27 01:35:10.265587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.647 [2024-07-27 01:35:10.265613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.647 [2024-07-27 01:35:10.265629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.647 [2024-07-27 01:35:10.265642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.647 [2024-07-27 01:35:10.265670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.647 qpair failed and we were unable to recover it. 00:27:18.647 [2024-07-27 01:35:10.275491] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.647 [2024-07-27 01:35:10.275655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.647 [2024-07-27 01:35:10.275680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.647 [2024-07-27 01:35:10.275696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.647 [2024-07-27 01:35:10.275709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.647 [2024-07-27 01:35:10.275737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.647 qpair failed and we were unable to recover it. 00:27:18.647 [2024-07-27 01:35:10.285474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.647 [2024-07-27 01:35:10.285624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.647 [2024-07-27 01:35:10.285654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.647 [2024-07-27 01:35:10.285670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.647 [2024-07-27 01:35:10.285684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.647 [2024-07-27 01:35:10.285713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.647 qpair failed and we were unable to recover it. 
00:27:18.647 [2024-07-27 01:35:10.295473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.647 [2024-07-27 01:35:10.295618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.647 [2024-07-27 01:35:10.295644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.647 [2024-07-27 01:35:10.295659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.647 [2024-07-27 01:35:10.295673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.647 [2024-07-27 01:35:10.295701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.647 qpair failed and we were unable to recover it. 00:27:18.647 [2024-07-27 01:35:10.305633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.647 [2024-07-27 01:35:10.305781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.647 [2024-07-27 01:35:10.305807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.647 [2024-07-27 01:35:10.305823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.647 [2024-07-27 01:35:10.305837] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.647 [2024-07-27 01:35:10.305866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.647 qpair failed and we were unable to recover it. 00:27:18.647 [2024-07-27 01:35:10.315600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.647 [2024-07-27 01:35:10.315766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.647 [2024-07-27 01:35:10.315791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.647 [2024-07-27 01:35:10.315807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.647 [2024-07-27 01:35:10.315821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.647 [2024-07-27 01:35:10.315849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.647 qpair failed and we were unable to recover it. 
00:27:18.647 [2024-07-27 01:35:10.325699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.647 [2024-07-27 01:35:10.325848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.647 [2024-07-27 01:35:10.325873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.647 [2024-07-27 01:35:10.325889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.647 [2024-07-27 01:35:10.325907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.647 [2024-07-27 01:35:10.325936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.647 qpair failed and we were unable to recover it. 00:27:18.647 [2024-07-27 01:35:10.335715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.647 [2024-07-27 01:35:10.335892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.647 [2024-07-27 01:35:10.335919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.647 [2024-07-27 01:35:10.335939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.647 [2024-07-27 01:35:10.335954] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.647 [2024-07-27 01:35:10.335984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.647 qpair failed and we were unable to recover it. 00:27:18.647 [2024-07-27 01:35:10.345679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.647 [2024-07-27 01:35:10.345834] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.647 [2024-07-27 01:35:10.345860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.647 [2024-07-27 01:35:10.345875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.647 [2024-07-27 01:35:10.345889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.647 [2024-07-27 01:35:10.345918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.647 qpair failed and we were unable to recover it. 
00:27:18.647 [2024-07-27 01:35:10.355687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.648 [2024-07-27 01:35:10.355834] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.648 [2024-07-27 01:35:10.355860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.648 [2024-07-27 01:35:10.355875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.648 [2024-07-27 01:35:10.355889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.648 [2024-07-27 01:35:10.355917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.648 qpair failed and we were unable to recover it. 00:27:18.648 [2024-07-27 01:35:10.365706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.648 [2024-07-27 01:35:10.365898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.648 [2024-07-27 01:35:10.365925] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.648 [2024-07-27 01:35:10.365941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.648 [2024-07-27 01:35:10.365954] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.648 [2024-07-27 01:35:10.365982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.648 qpair failed and we were unable to recover it. 00:27:18.648 [2024-07-27 01:35:10.375839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.648 [2024-07-27 01:35:10.376000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.648 [2024-07-27 01:35:10.376028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.648 [2024-07-27 01:35:10.376043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.648 [2024-07-27 01:35:10.376056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.648 [2024-07-27 01:35:10.376095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.648 qpair failed and we were unable to recover it. 
00:27:18.648 [2024-07-27 01:35:10.385757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.648 [2024-07-27 01:35:10.385905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.648 [2024-07-27 01:35:10.385930] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.648 [2024-07-27 01:35:10.385945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.648 [2024-07-27 01:35:10.385959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.648 [2024-07-27 01:35:10.385988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.648 qpair failed and we were unable to recover it. 00:27:18.648 [2024-07-27 01:35:10.395777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.648 [2024-07-27 01:35:10.395942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.648 [2024-07-27 01:35:10.395968] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.648 [2024-07-27 01:35:10.395983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.648 [2024-07-27 01:35:10.395997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.648 [2024-07-27 01:35:10.396025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.648 qpair failed and we were unable to recover it. 00:27:18.907 [2024-07-27 01:35:10.405809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.907 [2024-07-27 01:35:10.405960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.907 [2024-07-27 01:35:10.405986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.907 [2024-07-27 01:35:10.406002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.907 [2024-07-27 01:35:10.406015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.907 [2024-07-27 01:35:10.406044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.907 qpair failed and we were unable to recover it. 
00:27:18.907 [2024-07-27 01:35:10.415834] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.907 [2024-07-27 01:35:10.416014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.907 [2024-07-27 01:35:10.416041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.907 [2024-07-27 01:35:10.416068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.907 [2024-07-27 01:35:10.416092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.907 [2024-07-27 01:35:10.416124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.907 qpair failed and we were unable to recover it. 00:27:18.907 [2024-07-27 01:35:10.425877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.907 [2024-07-27 01:35:10.426035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.907 [2024-07-27 01:35:10.426070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.907 [2024-07-27 01:35:10.426089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.907 [2024-07-27 01:35:10.426103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.907 [2024-07-27 01:35:10.426133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.907 qpair failed and we were unable to recover it. 00:27:18.907 [2024-07-27 01:35:10.435920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.907 [2024-07-27 01:35:10.436072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.907 [2024-07-27 01:35:10.436097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.907 [2024-07-27 01:35:10.436112] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.907 [2024-07-27 01:35:10.436125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.907 [2024-07-27 01:35:10.436153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.907 qpair failed and we were unable to recover it. 
00:27:18.907 [2024-07-27 01:35:10.445945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.907 [2024-07-27 01:35:10.446113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.907 [2024-07-27 01:35:10.446139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.907 [2024-07-27 01:35:10.446154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.907 [2024-07-27 01:35:10.446169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.907 [2024-07-27 01:35:10.446199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.907 qpair failed and we were unable to recover it. 00:27:18.907 [2024-07-27 01:35:10.455949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.907 [2024-07-27 01:35:10.456105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.907 [2024-07-27 01:35:10.456131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.907 [2024-07-27 01:35:10.456147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.907 [2024-07-27 01:35:10.456160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.907 [2024-07-27 01:35:10.456189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.907 qpair failed and we were unable to recover it. 00:27:18.907 [2024-07-27 01:35:10.466013] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.907 [2024-07-27 01:35:10.466182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.907 [2024-07-27 01:35:10.466207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.907 [2024-07-27 01:35:10.466223] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.907 [2024-07-27 01:35:10.466237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.907 [2024-07-27 01:35:10.466265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.907 qpair failed and we were unable to recover it. 
00:27:18.907 [2024-07-27 01:35:10.476038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.907 [2024-07-27 01:35:10.476198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.907 [2024-07-27 01:35:10.476223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.907 [2024-07-27 01:35:10.476238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.907 [2024-07-27 01:35:10.476252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.907 [2024-07-27 01:35:10.476281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.907 qpair failed and we were unable to recover it. 00:27:18.907 [2024-07-27 01:35:10.486086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.907 [2024-07-27 01:35:10.486234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.486259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.908 [2024-07-27 01:35:10.486275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.908 [2024-07-27 01:35:10.486289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.908 [2024-07-27 01:35:10.486317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.908 qpair failed and we were unable to recover it. 00:27:18.908 [2024-07-27 01:35:10.496105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.908 [2024-07-27 01:35:10.496253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.496279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.908 [2024-07-27 01:35:10.496294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.908 [2024-07-27 01:35:10.496307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.908 [2024-07-27 01:35:10.496335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.908 qpair failed and we were unable to recover it. 
00:27:18.908 [2024-07-27 01:35:10.506148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.908 [2024-07-27 01:35:10.506309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.506334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.908 [2024-07-27 01:35:10.506349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.908 [2024-07-27 01:35:10.506369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.908 [2024-07-27 01:35:10.506398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.908 qpair failed and we were unable to recover it. 00:27:18.908 [2024-07-27 01:35:10.516147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.908 [2024-07-27 01:35:10.516310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.516335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.908 [2024-07-27 01:35:10.516350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.908 [2024-07-27 01:35:10.516364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.908 [2024-07-27 01:35:10.516392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.908 qpair failed and we were unable to recover it. 00:27:18.908 [2024-07-27 01:35:10.526213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.908 [2024-07-27 01:35:10.526391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.526417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.908 [2024-07-27 01:35:10.526433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.908 [2024-07-27 01:35:10.526447] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.908 [2024-07-27 01:35:10.526475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.908 qpair failed and we were unable to recover it. 
00:27:18.908 [2024-07-27 01:35:10.536237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.908 [2024-07-27 01:35:10.536396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.536424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.908 [2024-07-27 01:35:10.536443] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.908 [2024-07-27 01:35:10.536457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.908 [2024-07-27 01:35:10.536488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.908 qpair failed and we were unable to recover it. 00:27:18.908 [2024-07-27 01:35:10.546263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.908 [2024-07-27 01:35:10.546419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.546444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.908 [2024-07-27 01:35:10.546459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.908 [2024-07-27 01:35:10.546473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.908 [2024-07-27 01:35:10.546502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.908 qpair failed and we were unable to recover it. 00:27:18.908 [2024-07-27 01:35:10.556306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.908 [2024-07-27 01:35:10.556493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.556519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.908 [2024-07-27 01:35:10.556534] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.908 [2024-07-27 01:35:10.556548] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.908 [2024-07-27 01:35:10.556577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.908 qpair failed and we were unable to recover it. 
00:27:18.908 [2024-07-27 01:35:10.566335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.908 [2024-07-27 01:35:10.566492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.566518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.908 [2024-07-27 01:35:10.566533] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.908 [2024-07-27 01:35:10.566547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.908 [2024-07-27 01:35:10.566576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.908 qpair failed and we were unable to recover it. 00:27:18.908 [2024-07-27 01:35:10.576333] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.908 [2024-07-27 01:35:10.576478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.576504] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.908 [2024-07-27 01:35:10.576519] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.908 [2024-07-27 01:35:10.576533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.908 [2024-07-27 01:35:10.576561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.908 qpair failed and we were unable to recover it. 00:27:18.908 [2024-07-27 01:35:10.586373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.908 [2024-07-27 01:35:10.586525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.586551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.908 [2024-07-27 01:35:10.586567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.908 [2024-07-27 01:35:10.586581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.908 [2024-07-27 01:35:10.586610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.908 qpair failed and we were unable to recover it. 
00:27:18.908 [2024-07-27 01:35:10.596549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.908 [2024-07-27 01:35:10.596714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.596739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.908 [2024-07-27 01:35:10.596754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.908 [2024-07-27 01:35:10.596773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.908 [2024-07-27 01:35:10.596803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.908 qpair failed and we were unable to recover it. 00:27:18.908 [2024-07-27 01:35:10.606415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.908 [2024-07-27 01:35:10.606565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.606590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.908 [2024-07-27 01:35:10.606606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.908 [2024-07-27 01:35:10.606620] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.908 [2024-07-27 01:35:10.606648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.908 qpair failed and we were unable to recover it. 00:27:18.908 [2024-07-27 01:35:10.616482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.908 [2024-07-27 01:35:10.616628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.616653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.908 [2024-07-27 01:35:10.616669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.908 [2024-07-27 01:35:10.616682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.908 [2024-07-27 01:35:10.616711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.908 qpair failed and we were unable to recover it. 
00:27:18.908 [2024-07-27 01:35:10.626526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.908 [2024-07-27 01:35:10.626717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.908 [2024-07-27 01:35:10.626743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.909 [2024-07-27 01:35:10.626758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.909 [2024-07-27 01:35:10.626772] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.909 [2024-07-27 01:35:10.626800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.909 qpair failed and we were unable to recover it. 00:27:18.909 [2024-07-27 01:35:10.636536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.909 [2024-07-27 01:35:10.636682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.909 [2024-07-27 01:35:10.636708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.909 [2024-07-27 01:35:10.636724] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.909 [2024-07-27 01:35:10.636738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.909 [2024-07-27 01:35:10.636766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.909 qpair failed and we were unable to recover it. 00:27:18.909 [2024-07-27 01:35:10.646520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.909 [2024-07-27 01:35:10.646666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.909 [2024-07-27 01:35:10.646692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.909 [2024-07-27 01:35:10.646708] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.909 [2024-07-27 01:35:10.646722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.909 [2024-07-27 01:35:10.646752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.909 qpair failed and we were unable to recover it. 
00:27:18.909 [2024-07-27 01:35:10.656549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.909 [2024-07-27 01:35:10.656693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.909 [2024-07-27 01:35:10.656718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.909 [2024-07-27 01:35:10.656735] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.909 [2024-07-27 01:35:10.656749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:18.909 [2024-07-27 01:35:10.656779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.909 qpair failed and we were unable to recover it. 00:27:19.168 [2024-07-27 01:35:10.666610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.168 [2024-07-27 01:35:10.666768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.168 [2024-07-27 01:35:10.666793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.168 [2024-07-27 01:35:10.666808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.168 [2024-07-27 01:35:10.666822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.168 [2024-07-27 01:35:10.666852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.168 qpair failed and we were unable to recover it. 00:27:19.168 [2024-07-27 01:35:10.676598] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.168 [2024-07-27 01:35:10.676749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.168 [2024-07-27 01:35:10.676774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.168 [2024-07-27 01:35:10.676790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.168 [2024-07-27 01:35:10.676804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.168 [2024-07-27 01:35:10.676832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.168 qpair failed and we were unable to recover it. 
00:27:19.168 [2024-07-27 01:35:10.686697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.168 [2024-07-27 01:35:10.686841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.168 [2024-07-27 01:35:10.686866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.168 [2024-07-27 01:35:10.686881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.168 [2024-07-27 01:35:10.686901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.168 [2024-07-27 01:35:10.686931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.168 qpair failed and we were unable to recover it. 00:27:19.168 [2024-07-27 01:35:10.696766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.168 [2024-07-27 01:35:10.696910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.168 [2024-07-27 01:35:10.696936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.168 [2024-07-27 01:35:10.696951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.169 [2024-07-27 01:35:10.696965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.169 [2024-07-27 01:35:10.696994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.169 qpair failed and we were unable to recover it. 00:27:19.169 [2024-07-27 01:35:10.706769] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.169 [2024-07-27 01:35:10.706918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.169 [2024-07-27 01:35:10.706943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.169 [2024-07-27 01:35:10.706959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.169 [2024-07-27 01:35:10.706973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.169 [2024-07-27 01:35:10.707001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.169 qpair failed and we were unable to recover it. 
00:27:19.169 [2024-07-27 01:35:10.716710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.169 [2024-07-27 01:35:10.716884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.169 [2024-07-27 01:35:10.716910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.169 [2024-07-27 01:35:10.716925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.169 [2024-07-27 01:35:10.716939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.169 [2024-07-27 01:35:10.716969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.169 qpair failed and we were unable to recover it. 00:27:19.169 [2024-07-27 01:35:10.726759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.169 [2024-07-27 01:35:10.726915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.169 [2024-07-27 01:35:10.726940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.169 [2024-07-27 01:35:10.726956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.169 [2024-07-27 01:35:10.726970] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.169 [2024-07-27 01:35:10.726998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.169 qpair failed and we were unable to recover it. 00:27:19.169 [2024-07-27 01:35:10.736808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.169 [2024-07-27 01:35:10.736962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.169 [2024-07-27 01:35:10.736988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.169 [2024-07-27 01:35:10.737004] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.169 [2024-07-27 01:35:10.737018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.169 [2024-07-27 01:35:10.737047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.169 qpair failed and we were unable to recover it. 
00:27:19.169 [2024-07-27 01:35:10.746869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.169 [2024-07-27 01:35:10.747025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.169 [2024-07-27 01:35:10.747052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.169 [2024-07-27 01:35:10.747083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.169 [2024-07-27 01:35:10.747098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.169 [2024-07-27 01:35:10.747129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.169 qpair failed and we were unable to recover it. 00:27:19.169 [2024-07-27 01:35:10.756836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.169 [2024-07-27 01:35:10.756982] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.169 [2024-07-27 01:35:10.757007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.169 [2024-07-27 01:35:10.757023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.169 [2024-07-27 01:35:10.757037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.169 [2024-07-27 01:35:10.757072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.169 qpair failed and we were unable to recover it. 00:27:19.169 [2024-07-27 01:35:10.766857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.169 [2024-07-27 01:35:10.767000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.169 [2024-07-27 01:35:10.767026] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.169 [2024-07-27 01:35:10.767042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.169 [2024-07-27 01:35:10.767055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.169 [2024-07-27 01:35:10.767096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.169 qpair failed and we were unable to recover it. 
00:27:19.169 [2024-07-27 01:35:10.776918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.169 [2024-07-27 01:35:10.777077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.169 [2024-07-27 01:35:10.777103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.169 [2024-07-27 01:35:10.777124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.169 [2024-07-27 01:35:10.777139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.169 [2024-07-27 01:35:10.777167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.169 qpair failed and we were unable to recover it. 00:27:19.169 [2024-07-27 01:35:10.786945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.169 [2024-07-27 01:35:10.787169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.169 [2024-07-27 01:35:10.787197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.169 [2024-07-27 01:35:10.787214] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.169 [2024-07-27 01:35:10.787227] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.169 [2024-07-27 01:35:10.787257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.169 qpair failed and we were unable to recover it. 00:27:19.169 [2024-07-27 01:35:10.797012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.169 [2024-07-27 01:35:10.797222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.169 [2024-07-27 01:35:10.797249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.169 [2024-07-27 01:35:10.797265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.169 [2024-07-27 01:35:10.797283] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.169 [2024-07-27 01:35:10.797313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.169 qpair failed and we were unable to recover it. 
00:27:19.169 [2024-07-27 01:35:10.806990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.169 [2024-07-27 01:35:10.807134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.169 [2024-07-27 01:35:10.807161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.169 [2024-07-27 01:35:10.807176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.169 [2024-07-27 01:35:10.807191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.169 [2024-07-27 01:35:10.807219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.169 qpair failed and we were unable to recover it. 00:27:19.169 [2024-07-27 01:35:10.817111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.169 [2024-07-27 01:35:10.817259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.169 [2024-07-27 01:35:10.817284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.169 [2024-07-27 01:35:10.817300] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.169 [2024-07-27 01:35:10.817314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.169 [2024-07-27 01:35:10.817344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.169 qpair failed and we were unable to recover it. 00:27:19.169 [2024-07-27 01:35:10.827090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.169 [2024-07-27 01:35:10.827266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.169 [2024-07-27 01:35:10.827293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.169 [2024-07-27 01:35:10.827308] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.169 [2024-07-27 01:35:10.827322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.169 [2024-07-27 01:35:10.827351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.170 qpair failed and we were unable to recover it. 
00:27:19.170 [2024-07-27 01:35:10.837069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.170 [2024-07-27 01:35:10.837214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.170 [2024-07-27 01:35:10.837239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.170 [2024-07-27 01:35:10.837255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.170 [2024-07-27 01:35:10.837269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.170 [2024-07-27 01:35:10.837297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.170 qpair failed and we were unable to recover it. 00:27:19.170 [2024-07-27 01:35:10.847127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.170 [2024-07-27 01:35:10.847300] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.170 [2024-07-27 01:35:10.847326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.170 [2024-07-27 01:35:10.847341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.170 [2024-07-27 01:35:10.847355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.170 [2024-07-27 01:35:10.847384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.170 qpair failed and we were unable to recover it. 00:27:19.170 [2024-07-27 01:35:10.857160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.170 [2024-07-27 01:35:10.857353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.170 [2024-07-27 01:35:10.857378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.170 [2024-07-27 01:35:10.857394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.170 [2024-07-27 01:35:10.857407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.170 [2024-07-27 01:35:10.857435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.170 qpair failed and we were unable to recover it. 
00:27:19.170 [2024-07-27 01:35:10.867160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.170 [2024-07-27 01:35:10.867307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.170 [2024-07-27 01:35:10.867332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.170 [2024-07-27 01:35:10.867354] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.170 [2024-07-27 01:35:10.867368] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.170 [2024-07-27 01:35:10.867397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.170 qpair failed and we were unable to recover it. 00:27:19.170 [2024-07-27 01:35:10.877217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.170 [2024-07-27 01:35:10.877365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.170 [2024-07-27 01:35:10.877391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.170 [2024-07-27 01:35:10.877407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.170 [2024-07-27 01:35:10.877421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.170 [2024-07-27 01:35:10.877449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.170 qpair failed and we were unable to recover it. 00:27:19.170 [2024-07-27 01:35:10.887295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.170 [2024-07-27 01:35:10.887445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.170 [2024-07-27 01:35:10.887470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.170 [2024-07-27 01:35:10.887486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.170 [2024-07-27 01:35:10.887500] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.170 [2024-07-27 01:35:10.887529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.170 qpair failed and we were unable to recover it. 
00:27:19.170 [2024-07-27 01:35:10.897277] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.170 [2024-07-27 01:35:10.897435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.170 [2024-07-27 01:35:10.897461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.170 [2024-07-27 01:35:10.897476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.170 [2024-07-27 01:35:10.897490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.170 [2024-07-27 01:35:10.897518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.170 qpair failed and we were unable to recover it. 00:27:19.170 [2024-07-27 01:35:10.907290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.170 [2024-07-27 01:35:10.907489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.170 [2024-07-27 01:35:10.907513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.170 [2024-07-27 01:35:10.907529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.170 [2024-07-27 01:35:10.907543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.170 [2024-07-27 01:35:10.907571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.170 qpair failed and we were unable to recover it. 00:27:19.170 [2024-07-27 01:35:10.917294] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.170 [2024-07-27 01:35:10.917441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.170 [2024-07-27 01:35:10.917467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.170 [2024-07-27 01:35:10.917482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.170 [2024-07-27 01:35:10.917496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.170 [2024-07-27 01:35:10.917526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.170 qpair failed and we were unable to recover it. 
00:27:19.432 [2024-07-27 01:35:10.927314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.432 [2024-07-27 01:35:10.927459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.432 [2024-07-27 01:35:10.927484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.432 [2024-07-27 01:35:10.927499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.432 [2024-07-27 01:35:10.927513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.432 [2024-07-27 01:35:10.927541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.432 qpair failed and we were unable to recover it. 00:27:19.432 [2024-07-27 01:35:10.937377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.432 [2024-07-27 01:35:10.937521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.432 [2024-07-27 01:35:10.937546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.432 [2024-07-27 01:35:10.937562] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.432 [2024-07-27 01:35:10.937576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.432 [2024-07-27 01:35:10.937605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.432 qpair failed and we were unable to recover it. 00:27:19.432 [2024-07-27 01:35:10.947421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.432 [2024-07-27 01:35:10.947618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.432 [2024-07-27 01:35:10.947644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.432 [2024-07-27 01:35:10.947660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.432 [2024-07-27 01:35:10.947674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.432 [2024-07-27 01:35:10.947703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.432 qpair failed and we were unable to recover it. 
00:27:19.432 [2024-07-27 01:35:10.957436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.432 [2024-07-27 01:35:10.957601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.432 [2024-07-27 01:35:10.957626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.432 [2024-07-27 01:35:10.957649] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.432 [2024-07-27 01:35:10.957664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.433 [2024-07-27 01:35:10.957693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.433 qpair failed and we were unable to recover it. 00:27:19.433 [2024-07-27 01:35:10.967463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.433 [2024-07-27 01:35:10.967608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.433 [2024-07-27 01:35:10.967633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.433 [2024-07-27 01:35:10.967649] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.433 [2024-07-27 01:35:10.967663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.433 [2024-07-27 01:35:10.967694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.433 qpair failed and we were unable to recover it. 00:27:19.433 [2024-07-27 01:35:10.977484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.433 [2024-07-27 01:35:10.977628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.433 [2024-07-27 01:35:10.977653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.433 [2024-07-27 01:35:10.977669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.433 [2024-07-27 01:35:10.977683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.433 [2024-07-27 01:35:10.977711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.433 qpair failed and we were unable to recover it. 
00:27:19.433 [2024-07-27 01:35:10.987555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.433 [2024-07-27 01:35:10.987705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.433 [2024-07-27 01:35:10.987731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.433 [2024-07-27 01:35:10.987747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.433 [2024-07-27 01:35:10.987760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.433 [2024-07-27 01:35:10.987789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.433 qpair failed and we were unable to recover it. 00:27:19.433 [2024-07-27 01:35:10.997600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.433 [2024-07-27 01:35:10.997760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.433 [2024-07-27 01:35:10.997786] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.433 [2024-07-27 01:35:10.997803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.433 [2024-07-27 01:35:10.997816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.433 [2024-07-27 01:35:10.997845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.433 qpair failed and we were unable to recover it. 00:27:19.433 [2024-07-27 01:35:11.007613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.433 [2024-07-27 01:35:11.007787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.433 [2024-07-27 01:35:11.007813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.433 [2024-07-27 01:35:11.007829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.433 [2024-07-27 01:35:11.007843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.433 [2024-07-27 01:35:11.007872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.433 qpair failed and we were unable to recover it. 
00:27:19.433 [2024-07-27 01:35:11.017568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.433 [2024-07-27 01:35:11.017728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.433 [2024-07-27 01:35:11.017753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.433 [2024-07-27 01:35:11.017769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.433 [2024-07-27 01:35:11.017783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.433 [2024-07-27 01:35:11.017812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.433 qpair failed and we were unable to recover it. 00:27:19.433 [2024-07-27 01:35:11.027640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.433 [2024-07-27 01:35:11.027831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.433 [2024-07-27 01:35:11.027856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.433 [2024-07-27 01:35:11.027872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.433 [2024-07-27 01:35:11.027886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.433 [2024-07-27 01:35:11.027915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.433 qpair failed and we were unable to recover it. 00:27:19.433 [2024-07-27 01:35:11.037685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.433 [2024-07-27 01:35:11.037850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.433 [2024-07-27 01:35:11.037875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.433 [2024-07-27 01:35:11.037891] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.433 [2024-07-27 01:35:11.037906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.433 [2024-07-27 01:35:11.037934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.433 qpair failed and we were unable to recover it. 
00:27:19.433 [2024-07-27 01:35:11.047705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.433 [2024-07-27 01:35:11.047890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.433 [2024-07-27 01:35:11.047917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.433 [2024-07-27 01:35:11.047953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.433 [2024-07-27 01:35:11.047967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.433 [2024-07-27 01:35:11.048012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.433 qpair failed and we were unable to recover it. 00:27:19.433 [2024-07-27 01:35:11.057687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.433 [2024-07-27 01:35:11.057831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.433 [2024-07-27 01:35:11.057858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.433 [2024-07-27 01:35:11.057875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.433 [2024-07-27 01:35:11.057889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.433 [2024-07-27 01:35:11.057917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.433 qpair failed and we were unable to recover it. 00:27:19.433 [2024-07-27 01:35:11.067742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.433 [2024-07-27 01:35:11.067895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.433 [2024-07-27 01:35:11.067922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.433 [2024-07-27 01:35:11.067939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.433 [2024-07-27 01:35:11.067968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.433 [2024-07-27 01:35:11.067998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.433 qpair failed and we were unable to recover it. 
00:27:19.433 [2024-07-27 01:35:11.077794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.433 [2024-07-27 01:35:11.077986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.433 [2024-07-27 01:35:11.078013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.433 [2024-07-27 01:35:11.078029] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.433 [2024-07-27 01:35:11.078043] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.433 [2024-07-27 01:35:11.078078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.433 qpair failed and we were unable to recover it. 00:27:19.433 [2024-07-27 01:35:11.087764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.433 [2024-07-27 01:35:11.087919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.433 [2024-07-27 01:35:11.087945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.433 [2024-07-27 01:35:11.087962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.433 [2024-07-27 01:35:11.087976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.433 [2024-07-27 01:35:11.088005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.433 qpair failed and we were unable to recover it. 00:27:19.433 [2024-07-27 01:35:11.097803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.434 [2024-07-27 01:35:11.097957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.434 [2024-07-27 01:35:11.097984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.434 [2024-07-27 01:35:11.098001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.434 [2024-07-27 01:35:11.098014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.434 [2024-07-27 01:35:11.098065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.434 qpair failed and we were unable to recover it. 
00:27:19.434 [2024-07-27 01:35:11.107905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.434 [2024-07-27 01:35:11.108081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.434 [2024-07-27 01:35:11.108118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.434 [2024-07-27 01:35:11.108136] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.434 [2024-07-27 01:35:11.108151] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.434 [2024-07-27 01:35:11.108182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.434 qpair failed and we were unable to recover it. 00:27:19.434 [2024-07-27 01:35:11.117866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.434 [2024-07-27 01:35:11.118008] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.434 [2024-07-27 01:35:11.118035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.434 [2024-07-27 01:35:11.118052] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.434 [2024-07-27 01:35:11.118073] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.434 [2024-07-27 01:35:11.118103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.434 qpair failed and we were unable to recover it. 00:27:19.434 [2024-07-27 01:35:11.127891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.434 [2024-07-27 01:35:11.128043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.434 [2024-07-27 01:35:11.128077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.434 [2024-07-27 01:35:11.128095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.434 [2024-07-27 01:35:11.128111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.434 [2024-07-27 01:35:11.128140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.434 qpair failed and we were unable to recover it. 
00:27:19.434 [2024-07-27 01:35:11.137913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.434 [2024-07-27 01:35:11.138053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.434 [2024-07-27 01:35:11.138086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.434 [2024-07-27 01:35:11.138113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.434 [2024-07-27 01:35:11.138129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.434 [2024-07-27 01:35:11.138159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.434 qpair failed and we were unable to recover it. 00:27:19.434 [2024-07-27 01:35:11.147960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.434 [2024-07-27 01:35:11.148158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.434 [2024-07-27 01:35:11.148185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.434 [2024-07-27 01:35:11.148202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.434 [2024-07-27 01:35:11.148217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.434 [2024-07-27 01:35:11.148247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.434 qpair failed and we were unable to recover it. 00:27:19.434 [2024-07-27 01:35:11.158049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.434 [2024-07-27 01:35:11.158244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.434 [2024-07-27 01:35:11.158271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.434 [2024-07-27 01:35:11.158288] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.434 [2024-07-27 01:35:11.158302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.434 [2024-07-27 01:35:11.158337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.434 qpair failed and we were unable to recover it. 
00:27:19.434 [2024-07-27 01:35:11.168041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.434 [2024-07-27 01:35:11.168197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.434 [2024-07-27 01:35:11.168223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.434 [2024-07-27 01:35:11.168241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.434 [2024-07-27 01:35:11.168255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.434 [2024-07-27 01:35:11.168284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.434 qpair failed and we were unable to recover it. 00:27:19.434 [2024-07-27 01:35:11.178066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.434 [2024-07-27 01:35:11.178246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.434 [2024-07-27 01:35:11.178273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.434 [2024-07-27 01:35:11.178290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.434 [2024-07-27 01:35:11.178305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.434 [2024-07-27 01:35:11.178335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.434 qpair failed and we were unable to recover it. 00:27:19.434 [2024-07-27 01:35:11.188088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.434 [2024-07-27 01:35:11.188278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.434 [2024-07-27 01:35:11.188305] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.434 [2024-07-27 01:35:11.188321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.434 [2024-07-27 01:35:11.188335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.434 [2024-07-27 01:35:11.188363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.434 qpair failed and we were unable to recover it. 
00:27:19.696 [2024-07-27 01:35:11.198154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.696 [2024-07-27 01:35:11.198303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.696 [2024-07-27 01:35:11.198329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.696 [2024-07-27 01:35:11.198347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.696 [2024-07-27 01:35:11.198362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.696 [2024-07-27 01:35:11.198407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.696 qpair failed and we were unable to recover it. 00:27:19.696 [2024-07-27 01:35:11.208146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.696 [2024-07-27 01:35:11.208286] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.696 [2024-07-27 01:35:11.208313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.696 [2024-07-27 01:35:11.208330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.696 [2024-07-27 01:35:11.208345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.696 [2024-07-27 01:35:11.208374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.696 qpair failed and we were unable to recover it. 00:27:19.696 [2024-07-27 01:35:11.218151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.696 [2024-07-27 01:35:11.218299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.696 [2024-07-27 01:35:11.218326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.696 [2024-07-27 01:35:11.218342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.696 [2024-07-27 01:35:11.218356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.696 [2024-07-27 01:35:11.218385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.696 qpair failed and we were unable to recover it. 
00:27:19.696 [2024-07-27 01:35:11.228230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.696 [2024-07-27 01:35:11.228379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.696 [2024-07-27 01:35:11.228406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.696 [2024-07-27 01:35:11.228428] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.696 [2024-07-27 01:35:11.228444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.696 [2024-07-27 01:35:11.228489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.696 qpair failed and we were unable to recover it. 00:27:19.696 [2024-07-27 01:35:11.238350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.696 [2024-07-27 01:35:11.238534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.696 [2024-07-27 01:35:11.238561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.696 [2024-07-27 01:35:11.238578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.696 [2024-07-27 01:35:11.238593] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.696 [2024-07-27 01:35:11.238622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.696 qpair failed and we were unable to recover it. 00:27:19.696 [2024-07-27 01:35:11.248279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.696 [2024-07-27 01:35:11.248433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.696 [2024-07-27 01:35:11.248461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.696 [2024-07-27 01:35:11.248478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.696 [2024-07-27 01:35:11.248507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.696 [2024-07-27 01:35:11.248537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.696 qpair failed and we were unable to recover it. 
00:27:19.697 [2024-07-27 01:35:11.258292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.697 [2024-07-27 01:35:11.258451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.697 [2024-07-27 01:35:11.258478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.697 [2024-07-27 01:35:11.258495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.697 [2024-07-27 01:35:11.258524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.697 [2024-07-27 01:35:11.258553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.697 qpair failed and we were unable to recover it. 00:27:19.697 [2024-07-27 01:35:11.268307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.697 [2024-07-27 01:35:11.268471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.697 [2024-07-27 01:35:11.268498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.697 [2024-07-27 01:35:11.268515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.697 [2024-07-27 01:35:11.268529] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.697 [2024-07-27 01:35:11.268559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.697 qpair failed and we were unable to recover it. 00:27:19.697 [2024-07-27 01:35:11.278436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.697 [2024-07-27 01:35:11.278602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.697 [2024-07-27 01:35:11.278629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.697 [2024-07-27 01:35:11.278660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.697 [2024-07-27 01:35:11.278675] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.697 [2024-07-27 01:35:11.278703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.697 qpair failed and we were unable to recover it. 
00:27:19.697 [2024-07-27 01:35:11.288357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.697 [2024-07-27 01:35:11.288506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.697 [2024-07-27 01:35:11.288533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.697 [2024-07-27 01:35:11.288551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.697 [2024-07-27 01:35:11.288565] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.697 [2024-07-27 01:35:11.288594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.697 qpair failed and we were unable to recover it. 00:27:19.697 [2024-07-27 01:35:11.298387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.697 [2024-07-27 01:35:11.298552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.697 [2024-07-27 01:35:11.298576] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.697 [2024-07-27 01:35:11.298598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.697 [2024-07-27 01:35:11.298612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.697 [2024-07-27 01:35:11.298641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.697 qpair failed and we were unable to recover it. 00:27:19.697 [2024-07-27 01:35:11.308432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.697 [2024-07-27 01:35:11.308586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.697 [2024-07-27 01:35:11.308613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.697 [2024-07-27 01:35:11.308630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.697 [2024-07-27 01:35:11.308645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.697 [2024-07-27 01:35:11.308675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.697 qpair failed and we were unable to recover it. 
00:27:19.697 [2024-07-27 01:35:11.318490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.697 [2024-07-27 01:35:11.318651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.697 [2024-07-27 01:35:11.318685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.697 [2024-07-27 01:35:11.318704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.697 [2024-07-27 01:35:11.318719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.697 [2024-07-27 01:35:11.318749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.697 qpair failed and we were unable to recover it. 00:27:19.697 [2024-07-27 01:35:11.328467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.697 [2024-07-27 01:35:11.328614] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.697 [2024-07-27 01:35:11.328641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.697 [2024-07-27 01:35:11.328658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.697 [2024-07-27 01:35:11.328673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.697 [2024-07-27 01:35:11.328703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.697 qpair failed and we were unable to recover it. 00:27:19.697 [2024-07-27 01:35:11.338541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.697 [2024-07-27 01:35:11.338717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.697 [2024-07-27 01:35:11.338745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.697 [2024-07-27 01:35:11.338762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.697 [2024-07-27 01:35:11.338780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.697 [2024-07-27 01:35:11.338826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.697 qpair failed and we were unable to recover it. 
00:27:19.697 [2024-07-27 01:35:11.348523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.697 [2024-07-27 01:35:11.348681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.697 [2024-07-27 01:35:11.348708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.697 [2024-07-27 01:35:11.348725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.697 [2024-07-27 01:35:11.348739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.697 [2024-07-27 01:35:11.348769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.697 qpair failed and we were unable to recover it. 00:27:19.697 [2024-07-27 01:35:11.358590] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.697 [2024-07-27 01:35:11.358734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.697 [2024-07-27 01:35:11.358761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.697 [2024-07-27 01:35:11.358778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.697 [2024-07-27 01:35:11.358792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.697 [2024-07-27 01:35:11.358822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.697 qpair failed and we were unable to recover it. 00:27:19.697 [2024-07-27 01:35:11.368602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.697 [2024-07-27 01:35:11.368795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.697 [2024-07-27 01:35:11.368822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.697 [2024-07-27 01:35:11.368839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.697 [2024-07-27 01:35:11.368853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.697 [2024-07-27 01:35:11.368882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.697 qpair failed and we were unable to recover it. 
00:27:19.697 [2024-07-27 01:35:11.378604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.697 [2024-07-27 01:35:11.378745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.697 [2024-07-27 01:35:11.378771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.697 [2024-07-27 01:35:11.378788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.697 [2024-07-27 01:35:11.378802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.697 [2024-07-27 01:35:11.378831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.697 qpair failed and we were unable to recover it. 00:27:19.697 [2024-07-27 01:35:11.388731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.698 [2024-07-27 01:35:11.388881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.698 [2024-07-27 01:35:11.388907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.698 [2024-07-27 01:35:11.388924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.698 [2024-07-27 01:35:11.388939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.698 [2024-07-27 01:35:11.388967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.698 qpair failed and we were unable to recover it. 00:27:19.698 [2024-07-27 01:35:11.398682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.698 [2024-07-27 01:35:11.398836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.698 [2024-07-27 01:35:11.398863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.698 [2024-07-27 01:35:11.398880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.698 [2024-07-27 01:35:11.398895] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.698 [2024-07-27 01:35:11.398925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.698 qpair failed and we were unable to recover it. 
00:27:19.698 [2024-07-27 01:35:11.408701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.698 [2024-07-27 01:35:11.408894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.698 [2024-07-27 01:35:11.408926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.698 [2024-07-27 01:35:11.408944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.698 [2024-07-27 01:35:11.408958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.698 [2024-07-27 01:35:11.408987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.698 qpair failed and we were unable to recover it. 00:27:19.698 [2024-07-27 01:35:11.418751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.698 [2024-07-27 01:35:11.418900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.698 [2024-07-27 01:35:11.418927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.698 [2024-07-27 01:35:11.418944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.698 [2024-07-27 01:35:11.418959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.698 [2024-07-27 01:35:11.418988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.698 qpair failed and we were unable to recover it. 00:27:19.698 [2024-07-27 01:35:11.428750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.698 [2024-07-27 01:35:11.428904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.698 [2024-07-27 01:35:11.428931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.698 [2024-07-27 01:35:11.428948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.698 [2024-07-27 01:35:11.428962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.698 [2024-07-27 01:35:11.428991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.698 qpair failed and we were unable to recover it. 
00:27:19.698 [2024-07-27 01:35:11.438831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.698 [2024-07-27 01:35:11.438988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.698 [2024-07-27 01:35:11.439012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.698 [2024-07-27 01:35:11.439028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.698 [2024-07-27 01:35:11.439042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.698 [2024-07-27 01:35:11.439077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.698 qpair failed and we were unable to recover it. 00:27:19.698 [2024-07-27 01:35:11.448805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.698 [2024-07-27 01:35:11.448950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.698 [2024-07-27 01:35:11.448977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.698 [2024-07-27 01:35:11.448994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.698 [2024-07-27 01:35:11.449008] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.698 [2024-07-27 01:35:11.449038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.698 qpair failed and we were unable to recover it. 00:27:19.960 [2024-07-27 01:35:11.458873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.960 [2024-07-27 01:35:11.459020] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.960 [2024-07-27 01:35:11.459048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.960 [2024-07-27 01:35:11.459071] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.960 [2024-07-27 01:35:11.459086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.960 [2024-07-27 01:35:11.459116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.960 qpair failed and we were unable to recover it. 
00:27:19.960 [2024-07-27 01:35:11.468873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.960 [2024-07-27 01:35:11.469065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.960 [2024-07-27 01:35:11.469091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.960 [2024-07-27 01:35:11.469107] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.960 [2024-07-27 01:35:11.469122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.960 [2024-07-27 01:35:11.469151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.960 qpair failed and we were unable to recover it. 00:27:19.960 [2024-07-27 01:35:11.478887] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.960 [2024-07-27 01:35:11.479076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.960 [2024-07-27 01:35:11.479110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.960 [2024-07-27 01:35:11.479126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.960 [2024-07-27 01:35:11.479140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.960 [2024-07-27 01:35:11.479172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.960 qpair failed and we were unable to recover it. 00:27:19.960 [2024-07-27 01:35:11.488909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.960 [2024-07-27 01:35:11.489052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.960 [2024-07-27 01:35:11.489084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.960 [2024-07-27 01:35:11.489101] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.960 [2024-07-27 01:35:11.489115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.960 [2024-07-27 01:35:11.489146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.960 qpair failed and we were unable to recover it. 
00:27:19.960 [2024-07-27 01:35:11.498949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.961 [2024-07-27 01:35:11.499101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.961 [2024-07-27 01:35:11.499133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.961 [2024-07-27 01:35:11.499152] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.961 [2024-07-27 01:35:11.499166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.961 [2024-07-27 01:35:11.499198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.961 qpair failed and we were unable to recover it. 00:27:19.961 [2024-07-27 01:35:11.509000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.961 [2024-07-27 01:35:11.509150] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.961 [2024-07-27 01:35:11.509176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.961 [2024-07-27 01:35:11.509192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.961 [2024-07-27 01:35:11.509206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.961 [2024-07-27 01:35:11.509237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.961 qpair failed and we were unable to recover it. 00:27:19.961 [2024-07-27 01:35:11.519014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.961 [2024-07-27 01:35:11.519208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.961 [2024-07-27 01:35:11.519234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.961 [2024-07-27 01:35:11.519251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.961 [2024-07-27 01:35:11.519265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.961 [2024-07-27 01:35:11.519295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.961 qpair failed and we were unable to recover it. 
00:27:19.961 [2024-07-27 01:35:11.529049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.961 [2024-07-27 01:35:11.529257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.961 [2024-07-27 01:35:11.529283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.961 [2024-07-27 01:35:11.529299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.961 [2024-07-27 01:35:11.529313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.961 [2024-07-27 01:35:11.529342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.961 qpair failed and we were unable to recover it. 00:27:19.961 [2024-07-27 01:35:11.539055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.961 [2024-07-27 01:35:11.539213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.961 [2024-07-27 01:35:11.539239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.961 [2024-07-27 01:35:11.539256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.961 [2024-07-27 01:35:11.539270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.961 [2024-07-27 01:35:11.539305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.961 qpair failed and we were unable to recover it. 00:27:19.961 [2024-07-27 01:35:11.549113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.961 [2024-07-27 01:35:11.549287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.961 [2024-07-27 01:35:11.549313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.961 [2024-07-27 01:35:11.549330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.961 [2024-07-27 01:35:11.549344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.961 [2024-07-27 01:35:11.549373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.961 qpair failed and we were unable to recover it. 
00:27:19.961 [2024-07-27 01:35:11.559154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.961 [2024-07-27 01:35:11.559320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.961 [2024-07-27 01:35:11.559347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.961 [2024-07-27 01:35:11.559363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.961 [2024-07-27 01:35:11.559378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.961 [2024-07-27 01:35:11.559409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.961 qpair failed and we were unable to recover it. 00:27:19.961 [2024-07-27 01:35:11.569151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.961 [2024-07-27 01:35:11.569298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.961 [2024-07-27 01:35:11.569325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.961 [2024-07-27 01:35:11.569341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.961 [2024-07-27 01:35:11.569356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.961 [2024-07-27 01:35:11.569387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.961 qpair failed and we were unable to recover it. 00:27:19.961 [2024-07-27 01:35:11.579189] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.961 [2024-07-27 01:35:11.579337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.961 [2024-07-27 01:35:11.579364] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.961 [2024-07-27 01:35:11.579380] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.961 [2024-07-27 01:35:11.579394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.961 [2024-07-27 01:35:11.579425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.961 qpair failed and we were unable to recover it. 
00:27:19.961 [2024-07-27 01:35:11.589209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.961 [2024-07-27 01:35:11.589403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.961 [2024-07-27 01:35:11.589434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.961 [2024-07-27 01:35:11.589451] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.961 [2024-07-27 01:35:11.589466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.961 [2024-07-27 01:35:11.589495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.961 qpair failed and we were unable to recover it. 00:27:19.961 [2024-07-27 01:35:11.599222] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.961 [2024-07-27 01:35:11.599371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.961 [2024-07-27 01:35:11.599396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.961 [2024-07-27 01:35:11.599412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.961 [2024-07-27 01:35:11.599425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.961 [2024-07-27 01:35:11.599454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.961 qpair failed and we were unable to recover it. 00:27:19.961 [2024-07-27 01:35:11.609300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.961 [2024-07-27 01:35:11.609450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.961 [2024-07-27 01:35:11.609476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.961 [2024-07-27 01:35:11.609492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.961 [2024-07-27 01:35:11.609506] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.961 [2024-07-27 01:35:11.609534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.961 qpair failed and we were unable to recover it. 
00:27:19.961 [2024-07-27 01:35:11.619345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.961 [2024-07-27 01:35:11.619500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.961 [2024-07-27 01:35:11.619527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.961 [2024-07-27 01:35:11.619545] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.961 [2024-07-27 01:35:11.619559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.961 [2024-07-27 01:35:11.619590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.961 qpair failed and we were unable to recover it. 00:27:19.961 [2024-07-27 01:35:11.629326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.961 [2024-07-27 01:35:11.629525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.962 [2024-07-27 01:35:11.629551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.962 [2024-07-27 01:35:11.629567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.962 [2024-07-27 01:35:11.629581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.962 [2024-07-27 01:35:11.629616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-07-27 01:35:11.639331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.962 [2024-07-27 01:35:11.639476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.962 [2024-07-27 01:35:11.639501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.962 [2024-07-27 01:35:11.639517] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.962 [2024-07-27 01:35:11.639532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.962 [2024-07-27 01:35:11.639560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.962 qpair failed and we were unable to recover it. 
00:27:19.962 [2024-07-27 01:35:11.649366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.962 [2024-07-27 01:35:11.649514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.962 [2024-07-27 01:35:11.649539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.962 [2024-07-27 01:35:11.649555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.962 [2024-07-27 01:35:11.649569] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.962 [2024-07-27 01:35:11.649598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-07-27 01:35:11.659386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.962 [2024-07-27 01:35:11.659538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.962 [2024-07-27 01:35:11.659563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.962 [2024-07-27 01:35:11.659579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.962 [2024-07-27 01:35:11.659594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.962 [2024-07-27 01:35:11.659622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-07-27 01:35:11.669474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.962 [2024-07-27 01:35:11.669647] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.962 [2024-07-27 01:35:11.669674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.962 [2024-07-27 01:35:11.669690] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.962 [2024-07-27 01:35:11.669704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.962 [2024-07-27 01:35:11.669733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.962 qpair failed and we were unable to recover it. 
00:27:19.962 [2024-07-27 01:35:11.679468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.962 [2024-07-27 01:35:11.679613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.962 [2024-07-27 01:35:11.679647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.962 [2024-07-27 01:35:11.679663] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.962 [2024-07-27 01:35:11.679677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.962 [2024-07-27 01:35:11.679708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-07-27 01:35:11.689515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.962 [2024-07-27 01:35:11.689673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.962 [2024-07-27 01:35:11.689698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.962 [2024-07-27 01:35:11.689713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.962 [2024-07-27 01:35:11.689727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.962 [2024-07-27 01:35:11.689755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.962 qpair failed and we were unable to recover it. 00:27:19.962 [2024-07-27 01:35:11.699541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.962 [2024-07-27 01:35:11.699687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.962 [2024-07-27 01:35:11.699712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.962 [2024-07-27 01:35:11.699728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.962 [2024-07-27 01:35:11.699742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.962 [2024-07-27 01:35:11.699770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.962 qpair failed and we were unable to recover it. 
00:27:19.962 [2024-07-27 01:35:11.709555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.962 [2024-07-27 01:35:11.709704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.962 [2024-07-27 01:35:11.709730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.962 [2024-07-27 01:35:11.709746] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.962 [2024-07-27 01:35:11.709759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:19.962 [2024-07-27 01:35:11.709787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.962 qpair failed and we were unable to recover it. 00:27:20.224 [2024-07-27 01:35:11.719598] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.224 [2024-07-27 01:35:11.719746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.224 [2024-07-27 01:35:11.719771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.224 [2024-07-27 01:35:11.719787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.224 [2024-07-27 01:35:11.719800] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.224 [2024-07-27 01:35:11.719835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.224 qpair failed and we were unable to recover it. 00:27:20.224 [2024-07-27 01:35:11.729628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.224 [2024-07-27 01:35:11.729786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.224 [2024-07-27 01:35:11.729811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.224 [2024-07-27 01:35:11.729827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.224 [2024-07-27 01:35:11.729840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.224 [2024-07-27 01:35:11.729869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.224 qpair failed and we were unable to recover it. 
00:27:20.224 [2024-07-27 01:35:11.739651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.224 [2024-07-27 01:35:11.739829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.224 [2024-07-27 01:35:11.739854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.224 [2024-07-27 01:35:11.739869] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.224 [2024-07-27 01:35:11.739883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.224 [2024-07-27 01:35:11.739911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.224 qpair failed and we were unable to recover it. 00:27:20.224 [2024-07-27 01:35:11.749688] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.224 [2024-07-27 01:35:11.749837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.224 [2024-07-27 01:35:11.749863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.224 [2024-07-27 01:35:11.749878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.224 [2024-07-27 01:35:11.749892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.224 [2024-07-27 01:35:11.749919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.224 qpair failed and we were unable to recover it. 00:27:20.224 [2024-07-27 01:35:11.759698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.224 [2024-07-27 01:35:11.759842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.224 [2024-07-27 01:35:11.759868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.224 [2024-07-27 01:35:11.759884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.224 [2024-07-27 01:35:11.759898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.224 [2024-07-27 01:35:11.759926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.224 qpair failed and we were unable to recover it. 
00:27:20.224 [2024-07-27 01:35:11.769727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.224 [2024-07-27 01:35:11.769868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.224 [2024-07-27 01:35:11.769898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.224 [2024-07-27 01:35:11.769914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.224 [2024-07-27 01:35:11.769928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.224 [2024-07-27 01:35:11.769956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.224 qpair failed and we were unable to recover it. 00:27:20.224 [2024-07-27 01:35:11.779740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.224 [2024-07-27 01:35:11.779896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.224 [2024-07-27 01:35:11.779922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.224 [2024-07-27 01:35:11.779937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.224 [2024-07-27 01:35:11.779951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.224 [2024-07-27 01:35:11.779980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.224 qpair failed and we were unable to recover it. 00:27:20.224 [2024-07-27 01:35:11.789803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.224 [2024-07-27 01:35:11.789955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.224 [2024-07-27 01:35:11.789980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.224 [2024-07-27 01:35:11.789996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.224 [2024-07-27 01:35:11.790010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.224 [2024-07-27 01:35:11.790038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.224 qpair failed and we were unable to recover it. 
00:27:20.224 [2024-07-27 01:35:11.799802] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.224 [2024-07-27 01:35:11.799947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.224 [2024-07-27 01:35:11.799972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.224 [2024-07-27 01:35:11.799987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.224 [2024-07-27 01:35:11.800001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.225 [2024-07-27 01:35:11.800029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.225 qpair failed and we were unable to recover it. 00:27:20.225 [2024-07-27 01:35:11.809882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.225 [2024-07-27 01:35:11.810033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.225 [2024-07-27 01:35:11.810067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.225 [2024-07-27 01:35:11.810086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.225 [2024-07-27 01:35:11.810101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.225 [2024-07-27 01:35:11.810135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.225 qpair failed and we were unable to recover it. 00:27:20.225 [2024-07-27 01:35:11.819886] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.225 [2024-07-27 01:35:11.820033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.225 [2024-07-27 01:35:11.820064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.225 [2024-07-27 01:35:11.820082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.225 [2024-07-27 01:35:11.820097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.225 [2024-07-27 01:35:11.820126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.225 qpair failed and we were unable to recover it. 
00:27:20.225 [2024-07-27 01:35:11.829914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.225 [2024-07-27 01:35:11.830069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.225 [2024-07-27 01:35:11.830094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.225 [2024-07-27 01:35:11.830109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.225 [2024-07-27 01:35:11.830123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.225 [2024-07-27 01:35:11.830152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.225 qpair failed and we were unable to recover it. 00:27:20.225 [2024-07-27 01:35:11.839910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.225 [2024-07-27 01:35:11.840057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.225 [2024-07-27 01:35:11.840088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.225 [2024-07-27 01:35:11.840103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.225 [2024-07-27 01:35:11.840117] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.225 [2024-07-27 01:35:11.840146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.225 qpair failed and we were unable to recover it. 00:27:20.225 [2024-07-27 01:35:11.849937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.225 [2024-07-27 01:35:11.850105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.225 [2024-07-27 01:35:11.850132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.225 [2024-07-27 01:35:11.850147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.225 [2024-07-27 01:35:11.850161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.225 [2024-07-27 01:35:11.850190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.225 qpair failed and we were unable to recover it. 
00:27:20.225 [2024-07-27 01:35:11.859987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.225 [2024-07-27 01:35:11.860153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.225 [2024-07-27 01:35:11.860184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.225 [2024-07-27 01:35:11.860200] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.225 [2024-07-27 01:35:11.860214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.225 [2024-07-27 01:35:11.860243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.225 qpair failed and we were unable to recover it. 00:27:20.225 [2024-07-27 01:35:11.870034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.225 [2024-07-27 01:35:11.870222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.225 [2024-07-27 01:35:11.870248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.225 [2024-07-27 01:35:11.870263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.225 [2024-07-27 01:35:11.870277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.225 [2024-07-27 01:35:11.870305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.225 qpair failed and we were unable to recover it. 00:27:20.225 [2024-07-27 01:35:11.880077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.225 [2024-07-27 01:35:11.880276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.225 [2024-07-27 01:35:11.880301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.225 [2024-07-27 01:35:11.880317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.225 [2024-07-27 01:35:11.880331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.225 [2024-07-27 01:35:11.880360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.225 qpair failed and we were unable to recover it. 
00:27:20.225 [2024-07-27 01:35:11.890057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.225 [2024-07-27 01:35:11.890209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.225 [2024-07-27 01:35:11.890234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.225 [2024-07-27 01:35:11.890249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.225 [2024-07-27 01:35:11.890263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.225 [2024-07-27 01:35:11.890291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.225 qpair failed and we were unable to recover it. 00:27:20.225 [2024-07-27 01:35:11.900079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.225 [2024-07-27 01:35:11.900228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.225 [2024-07-27 01:35:11.900253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.225 [2024-07-27 01:35:11.900269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.225 [2024-07-27 01:35:11.900282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.225 [2024-07-27 01:35:11.900317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.225 qpair failed and we were unable to recover it. 00:27:20.225 [2024-07-27 01:35:11.910115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.225 [2024-07-27 01:35:11.910266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.225 [2024-07-27 01:35:11.910291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.225 [2024-07-27 01:35:11.910306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.225 [2024-07-27 01:35:11.910320] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.225 [2024-07-27 01:35:11.910350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.225 qpair failed and we were unable to recover it. 
00:27:20.225 [2024-07-27 01:35:11.920164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.225 [2024-07-27 01:35:11.920352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.225 [2024-07-27 01:35:11.920377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.225 [2024-07-27 01:35:11.920393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.225 [2024-07-27 01:35:11.920407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.225 [2024-07-27 01:35:11.920435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.225 qpair failed and we were unable to recover it. 00:27:20.225 [2024-07-27 01:35:11.930168] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.225 [2024-07-27 01:35:11.930314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.225 [2024-07-27 01:35:11.930340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.225 [2024-07-27 01:35:11.930355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.225 [2024-07-27 01:35:11.930369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.225 [2024-07-27 01:35:11.930397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.225 qpair failed and we were unable to recover it. 00:27:20.225 [2024-07-27 01:35:11.940224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.226 [2024-07-27 01:35:11.940368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.226 [2024-07-27 01:35:11.940394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.226 [2024-07-27 01:35:11.940409] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.226 [2024-07-27 01:35:11.940423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.226 [2024-07-27 01:35:11.940451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.226 qpair failed and we were unable to recover it. 
00:27:20.226 [2024-07-27 01:35:11.950386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.226 [2024-07-27 01:35:11.950540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.226 [2024-07-27 01:35:11.950571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.226 [2024-07-27 01:35:11.950587] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.226 [2024-07-27 01:35:11.950601] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.226 [2024-07-27 01:35:11.950629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.226 qpair failed and we were unable to recover it. 00:27:20.226 [2024-07-27 01:35:11.960273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.226 [2024-07-27 01:35:11.960427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.226 [2024-07-27 01:35:11.960452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.226 [2024-07-27 01:35:11.960468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.226 [2024-07-27 01:35:11.960482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.226 [2024-07-27 01:35:11.960509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.226 qpair failed and we were unable to recover it. 00:27:20.226 [2024-07-27 01:35:11.970308] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.226 [2024-07-27 01:35:11.970457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.226 [2024-07-27 01:35:11.970483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.226 [2024-07-27 01:35:11.970498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.226 [2024-07-27 01:35:11.970512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.226 [2024-07-27 01:35:11.970540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.226 qpair failed and we were unable to recover it. 
00:27:20.486 [2024-07-27 01:35:11.980314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.486 [2024-07-27 01:35:11.980460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.486 [2024-07-27 01:35:11.980486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.486 [2024-07-27 01:35:11.980502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.486 [2024-07-27 01:35:11.980516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.486 [2024-07-27 01:35:11.980545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.486 qpair failed and we were unable to recover it. 00:27:20.486 [2024-07-27 01:35:11.990409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.486 [2024-07-27 01:35:11.990594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.486 [2024-07-27 01:35:11.990620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.486 [2024-07-27 01:35:11.990636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.486 [2024-07-27 01:35:11.990655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.486 [2024-07-27 01:35:11.990685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.486 qpair failed and we were unable to recover it. 00:27:20.486 [2024-07-27 01:35:12.000371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.486 [2024-07-27 01:35:12.000520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.486 [2024-07-27 01:35:12.000545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.486 [2024-07-27 01:35:12.000560] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.486 [2024-07-27 01:35:12.000574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.486 [2024-07-27 01:35:12.000603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.486 qpair failed and we were unable to recover it. 
00:27:20.486 [2024-07-27 01:35:12.010398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.486 [2024-07-27 01:35:12.010541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.486 [2024-07-27 01:35:12.010566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.486 [2024-07-27 01:35:12.010581] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.486 [2024-07-27 01:35:12.010595] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.486 [2024-07-27 01:35:12.010623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.486 qpair failed and we were unable to recover it. 00:27:20.486 [2024-07-27 01:35:12.020504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.486 [2024-07-27 01:35:12.020648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.486 [2024-07-27 01:35:12.020673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.486 [2024-07-27 01:35:12.020689] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.486 [2024-07-27 01:35:12.020703] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.486 [2024-07-27 01:35:12.020731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.486 qpair failed and we were unable to recover it. 00:27:20.486 [2024-07-27 01:35:12.030536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.487 [2024-07-27 01:35:12.030716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.487 [2024-07-27 01:35:12.030741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.487 [2024-07-27 01:35:12.030757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.487 [2024-07-27 01:35:12.030770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.487 [2024-07-27 01:35:12.030798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.487 qpair failed and we were unable to recover it. 
00:27:20.487 [2024-07-27 01:35:12.040517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.487 [2024-07-27 01:35:12.040670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.487 [2024-07-27 01:35:12.040695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.487 [2024-07-27 01:35:12.040710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.487 [2024-07-27 01:35:12.040724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.487 [2024-07-27 01:35:12.040752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.487 qpair failed and we were unable to recover it. 00:27:20.487 [2024-07-27 01:35:12.050517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.487 [2024-07-27 01:35:12.050661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.487 [2024-07-27 01:35:12.050687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.487 [2024-07-27 01:35:12.050702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.487 [2024-07-27 01:35:12.050716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.487 [2024-07-27 01:35:12.050744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.487 qpair failed and we were unable to recover it. 00:27:20.487 [2024-07-27 01:35:12.060604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.487 [2024-07-27 01:35:12.060748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.487 [2024-07-27 01:35:12.060773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.487 [2024-07-27 01:35:12.060790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.487 [2024-07-27 01:35:12.060804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.487 [2024-07-27 01:35:12.060832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.487 qpair failed and we were unable to recover it. 
00:27:20.487 [2024-07-27 01:35:12.070592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.487 [2024-07-27 01:35:12.070777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.487 [2024-07-27 01:35:12.070802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.487 [2024-07-27 01:35:12.070818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.487 [2024-07-27 01:35:12.070832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.487 [2024-07-27 01:35:12.070859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.487 qpair failed and we were unable to recover it. 00:27:20.487 [2024-07-27 01:35:12.080633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.487 [2024-07-27 01:35:12.080802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.487 [2024-07-27 01:35:12.080827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.487 [2024-07-27 01:35:12.080843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.487 [2024-07-27 01:35:12.080862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.487 [2024-07-27 01:35:12.080892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.487 qpair failed and we were unable to recover it. 00:27:20.487 [2024-07-27 01:35:12.090651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.487 [2024-07-27 01:35:12.090802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.487 [2024-07-27 01:35:12.090827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.487 [2024-07-27 01:35:12.090843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.487 [2024-07-27 01:35:12.090856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.487 [2024-07-27 01:35:12.090884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.487 qpair failed and we were unable to recover it. 
00:27:20.487 [2024-07-27 01:35:12.100653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.487 [2024-07-27 01:35:12.100792] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.487 [2024-07-27 01:35:12.100817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.487 [2024-07-27 01:35:12.100832] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.487 [2024-07-27 01:35:12.100847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.487 [2024-07-27 01:35:12.100876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.487 qpair failed and we were unable to recover it. 00:27:20.487 [2024-07-27 01:35:12.110696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.487 [2024-07-27 01:35:12.110845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.487 [2024-07-27 01:35:12.110871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.487 [2024-07-27 01:35:12.110887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.487 [2024-07-27 01:35:12.110900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.487 [2024-07-27 01:35:12.110929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.487 qpair failed and we were unable to recover it. 00:27:20.487 [2024-07-27 01:35:12.120756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.487 [2024-07-27 01:35:12.120907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.487 [2024-07-27 01:35:12.120943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.487 [2024-07-27 01:35:12.120958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.487 [2024-07-27 01:35:12.120972] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.487 [2024-07-27 01:35:12.121001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.487 qpair failed and we were unable to recover it. 
00:27:20.487 [2024-07-27 01:35:12.130776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.487 [2024-07-27 01:35:12.130922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.487 [2024-07-27 01:35:12.130947] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.487 [2024-07-27 01:35:12.130962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.487 [2024-07-27 01:35:12.130976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.487 [2024-07-27 01:35:12.131003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.487 qpair failed and we were unable to recover it. 00:27:20.487 [2024-07-27 01:35:12.140839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.487 [2024-07-27 01:35:12.140983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.487 [2024-07-27 01:35:12.141008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.487 [2024-07-27 01:35:12.141023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.487 [2024-07-27 01:35:12.141036] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.487 [2024-07-27 01:35:12.141071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.487 qpair failed and we were unable to recover it. 00:27:20.487 [2024-07-27 01:35:12.150913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.487 [2024-07-27 01:35:12.151071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.487 [2024-07-27 01:35:12.151097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.487 [2024-07-27 01:35:12.151112] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.487 [2024-07-27 01:35:12.151125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.487 [2024-07-27 01:35:12.151153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.487 qpair failed and we were unable to recover it. 
00:27:20.487 [2024-07-27 01:35:12.160850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.487 [2024-07-27 01:35:12.161022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.487 [2024-07-27 01:35:12.161050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.488 [2024-07-27 01:35:12.161079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.488 [2024-07-27 01:35:12.161095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.488 [2024-07-27 01:35:12.161125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.488 qpair failed and we were unable to recover it. 00:27:20.488 [2024-07-27 01:35:12.170895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.488 [2024-07-27 01:35:12.171039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.488 [2024-07-27 01:35:12.171079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.488 [2024-07-27 01:35:12.171095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.488 [2024-07-27 01:35:12.171126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.488 [2024-07-27 01:35:12.171157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.488 qpair failed and we were unable to recover it. 00:27:20.488 [2024-07-27 01:35:12.180883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.488 [2024-07-27 01:35:12.181035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.488 [2024-07-27 01:35:12.181066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.488 [2024-07-27 01:35:12.181084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.488 [2024-07-27 01:35:12.181099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.488 [2024-07-27 01:35:12.181128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.488 qpair failed and we were unable to recover it. 
00:27:20.488 [2024-07-27 01:35:12.190944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.488 [2024-07-27 01:35:12.191105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.488 [2024-07-27 01:35:12.191131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.488 [2024-07-27 01:35:12.191148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.488 [2024-07-27 01:35:12.191162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.488 [2024-07-27 01:35:12.191192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.488 qpair failed and we were unable to recover it. 00:27:20.488 [2024-07-27 01:35:12.200960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.488 [2024-07-27 01:35:12.201108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.488 [2024-07-27 01:35:12.201134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.488 [2024-07-27 01:35:12.201150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.488 [2024-07-27 01:35:12.201164] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.488 [2024-07-27 01:35:12.201193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.488 qpair failed and we were unable to recover it. 00:27:20.488 [2024-07-27 01:35:12.210986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.488 [2024-07-27 01:35:12.211149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.488 [2024-07-27 01:35:12.211175] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.488 [2024-07-27 01:35:12.211190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.488 [2024-07-27 01:35:12.211204] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.488 [2024-07-27 01:35:12.211235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.488 qpair failed and we were unable to recover it. 
00:27:20.488 [2024-07-27 01:35:12.221130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.488 [2024-07-27 01:35:12.221297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.488 [2024-07-27 01:35:12.221322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.488 [2024-07-27 01:35:12.221338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.488 [2024-07-27 01:35:12.221351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.488 [2024-07-27 01:35:12.221380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.488 qpair failed and we were unable to recover it. 00:27:20.488 [2024-07-27 01:35:12.231055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.488 [2024-07-27 01:35:12.231225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.488 [2024-07-27 01:35:12.231250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.488 [2024-07-27 01:35:12.231266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.488 [2024-07-27 01:35:12.231279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.488 [2024-07-27 01:35:12.231308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.488 qpair failed and we were unable to recover it. 00:27:20.488 [2024-07-27 01:35:12.241070] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.488 [2024-07-27 01:35:12.241236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.488 [2024-07-27 01:35:12.241261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.488 [2024-07-27 01:35:12.241277] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.488 [2024-07-27 01:35:12.241291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.488 [2024-07-27 01:35:12.241321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.488 qpair failed and we were unable to recover it. 
00:27:20.748 [2024-07-27 01:35:12.251099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.748 [2024-07-27 01:35:12.251263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.748 [2024-07-27 01:35:12.251289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.748 [2024-07-27 01:35:12.251305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.748 [2024-07-27 01:35:12.251319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.748 [2024-07-27 01:35:12.251351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.748 qpair failed and we were unable to recover it. 00:27:20.748 [2024-07-27 01:35:12.261155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.748 [2024-07-27 01:35:12.261333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.748 [2024-07-27 01:35:12.261359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.748 [2024-07-27 01:35:12.261375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.748 [2024-07-27 01:35:12.261394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.748 [2024-07-27 01:35:12.261423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.748 qpair failed and we were unable to recover it. 00:27:20.748 [2024-07-27 01:35:12.271165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.748 [2024-07-27 01:35:12.271362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.748 [2024-07-27 01:35:12.271388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.748 [2024-07-27 01:35:12.271406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.748 [2024-07-27 01:35:12.271420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.748 [2024-07-27 01:35:12.271450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.748 qpair failed and we were unable to recover it. 
00:27:20.748 [2024-07-27 01:35:12.281224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.748 [2024-07-27 01:35:12.281376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.748 [2024-07-27 01:35:12.281402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.748 [2024-07-27 01:35:12.281417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.748 [2024-07-27 01:35:12.281431] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.748 [2024-07-27 01:35:12.281459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.748 qpair failed and we were unable to recover it. 00:27:20.748 [2024-07-27 01:35:12.291223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.748 [2024-07-27 01:35:12.291377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.748 [2024-07-27 01:35:12.291403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.748 [2024-07-27 01:35:12.291418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.748 [2024-07-27 01:35:12.291432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.748 [2024-07-27 01:35:12.291461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.748 qpair failed and we were unable to recover it. 00:27:20.748 [2024-07-27 01:35:12.301281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.748 [2024-07-27 01:35:12.301429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.748 [2024-07-27 01:35:12.301455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.748 [2024-07-27 01:35:12.301470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.748 [2024-07-27 01:35:12.301483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.748 [2024-07-27 01:35:12.301512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.748 qpair failed and we were unable to recover it. 
00:27:20.748 [2024-07-27 01:35:12.311305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.748 [2024-07-27 01:35:12.311457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.748 [2024-07-27 01:35:12.311482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.748 [2024-07-27 01:35:12.311497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.748 [2024-07-27 01:35:12.311511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.748 [2024-07-27 01:35:12.311539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.748 qpair failed and we were unable to recover it. 00:27:20.748 [2024-07-27 01:35:12.321312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.748 [2024-07-27 01:35:12.321457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.748 [2024-07-27 01:35:12.321482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.748 [2024-07-27 01:35:12.321498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.748 [2024-07-27 01:35:12.321512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.748 [2024-07-27 01:35:12.321540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.748 qpair failed and we were unable to recover it. 00:27:20.748 [2024-07-27 01:35:12.331318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.748 [2024-07-27 01:35:12.331466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.748 [2024-07-27 01:35:12.331491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.748 [2024-07-27 01:35:12.331506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.748 [2024-07-27 01:35:12.331520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.748 [2024-07-27 01:35:12.331549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.748 qpair failed and we were unable to recover it. 
00:27:20.748 [2024-07-27 01:35:12.341352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.748 [2024-07-27 01:35:12.341495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.748 [2024-07-27 01:35:12.341520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.748 [2024-07-27 01:35:12.341535] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.748 [2024-07-27 01:35:12.341548] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.748 [2024-07-27 01:35:12.341577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.748 qpair failed and we were unable to recover it. 00:27:20.748 [2024-07-27 01:35:12.351414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.748 [2024-07-27 01:35:12.351573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.748 [2024-07-27 01:35:12.351598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.748 [2024-07-27 01:35:12.351613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.748 [2024-07-27 01:35:12.351632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.748 [2024-07-27 01:35:12.351662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.748 qpair failed and we were unable to recover it. 00:27:20.748 [2024-07-27 01:35:12.361471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.748 [2024-07-27 01:35:12.361618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.748 [2024-07-27 01:35:12.361643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.749 [2024-07-27 01:35:12.361659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.749 [2024-07-27 01:35:12.361673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.749 [2024-07-27 01:35:12.361702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.749 qpair failed and we were unable to recover it. 
00:27:20.749 [2024-07-27 01:35:12.371416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.749 [2024-07-27 01:35:12.371558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.749 [2024-07-27 01:35:12.371583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.749 [2024-07-27 01:35:12.371598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.749 [2024-07-27 01:35:12.371612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.749 [2024-07-27 01:35:12.371640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.749 qpair failed and we were unable to recover it. 00:27:20.749 [2024-07-27 01:35:12.381438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.749 [2024-07-27 01:35:12.381588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.749 [2024-07-27 01:35:12.381614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.749 [2024-07-27 01:35:12.381630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.749 [2024-07-27 01:35:12.381643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.749 [2024-07-27 01:35:12.381670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.749 qpair failed and we were unable to recover it. 00:27:20.749 [2024-07-27 01:35:12.391539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.749 [2024-07-27 01:35:12.391696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.749 [2024-07-27 01:35:12.391723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.749 [2024-07-27 01:35:12.391743] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.749 [2024-07-27 01:35:12.391757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.749 [2024-07-27 01:35:12.391787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.749 qpair failed and we were unable to recover it. 
00:27:20.749 [2024-07-27 01:35:12.401528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.749 [2024-07-27 01:35:12.401675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.749 [2024-07-27 01:35:12.401701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.749 [2024-07-27 01:35:12.401717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.749 [2024-07-27 01:35:12.401731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.749 [2024-07-27 01:35:12.401761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.749 qpair failed and we were unable to recover it. 00:27:20.749 [2024-07-27 01:35:12.411657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.749 [2024-07-27 01:35:12.411838] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.749 [2024-07-27 01:35:12.411864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.749 [2024-07-27 01:35:12.411880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.749 [2024-07-27 01:35:12.411894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.749 [2024-07-27 01:35:12.411923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.749 qpair failed and we were unable to recover it. 00:27:20.749 [2024-07-27 01:35:12.421674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.749 [2024-07-27 01:35:12.421843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.749 [2024-07-27 01:35:12.421871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.749 [2024-07-27 01:35:12.421887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.749 [2024-07-27 01:35:12.421901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.749 [2024-07-27 01:35:12.421931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.749 qpair failed and we were unable to recover it. 
00:27:20.749 [2024-07-27 01:35:12.431628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.749 [2024-07-27 01:35:12.431778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.749 [2024-07-27 01:35:12.431803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.749 [2024-07-27 01:35:12.431818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.749 [2024-07-27 01:35:12.431832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.749 [2024-07-27 01:35:12.431861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.749 qpair failed and we were unable to recover it. 00:27:20.749 [2024-07-27 01:35:12.441612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.749 [2024-07-27 01:35:12.441760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.749 [2024-07-27 01:35:12.441784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.749 [2024-07-27 01:35:12.441804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.749 [2024-07-27 01:35:12.441818] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.749 [2024-07-27 01:35:12.441846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.749 qpair failed and we were unable to recover it. 00:27:20.749 [2024-07-27 01:35:12.451660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.749 [2024-07-27 01:35:12.451816] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.749 [2024-07-27 01:35:12.451841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.749 [2024-07-27 01:35:12.451857] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.749 [2024-07-27 01:35:12.451871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.749 [2024-07-27 01:35:12.451901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.749 qpair failed and we were unable to recover it. 
00:27:20.749 [2024-07-27 01:35:12.461713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.749 [2024-07-27 01:35:12.461864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.749 [2024-07-27 01:35:12.461889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.749 [2024-07-27 01:35:12.461905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.749 [2024-07-27 01:35:12.461919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.749 [2024-07-27 01:35:12.461947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.749 qpair failed and we were unable to recover it. 00:27:20.749 [2024-07-27 01:35:12.471743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.749 [2024-07-27 01:35:12.471894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.749 [2024-07-27 01:35:12.471920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.749 [2024-07-27 01:35:12.471935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.749 [2024-07-27 01:35:12.471948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.749 [2024-07-27 01:35:12.471976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.749 qpair failed and we were unable to recover it. 00:27:20.749 [2024-07-27 01:35:12.481758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.749 [2024-07-27 01:35:12.481910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.749 [2024-07-27 01:35:12.481936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.749 [2024-07-27 01:35:12.481952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.749 [2024-07-27 01:35:12.481966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.749 [2024-07-27 01:35:12.481995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.749 qpair failed and we were unable to recover it. 
00:27:20.749 [2024-07-27 01:35:12.491867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.749 [2024-07-27 01:35:12.492021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.749 [2024-07-27 01:35:12.492047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.749 [2024-07-27 01:35:12.492072] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.750 [2024-07-27 01:35:12.492090] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.750 [2024-07-27 01:35:12.492119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.750 qpair failed and we were unable to recover it. 00:27:20.750 [2024-07-27 01:35:12.501802] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.750 [2024-07-27 01:35:12.501950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.750 [2024-07-27 01:35:12.501976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.750 [2024-07-27 01:35:12.501991] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.750 [2024-07-27 01:35:12.502005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:20.750 [2024-07-27 01:35:12.502033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.750 qpair failed and we were unable to recover it. 00:27:21.009 [2024-07-27 01:35:12.511876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.009 [2024-07-27 01:35:12.512031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.009 [2024-07-27 01:35:12.512056] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.009 [2024-07-27 01:35:12.512081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.009 [2024-07-27 01:35:12.512095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.009 [2024-07-27 01:35:12.512125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.009 qpair failed and we were unable to recover it. 
00:27:21.009 [2024-07-27 01:35:12.521866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.009 [2024-07-27 01:35:12.522014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.009 [2024-07-27 01:35:12.522039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.009 [2024-07-27 01:35:12.522055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.009 [2024-07-27 01:35:12.522076] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.009 [2024-07-27 01:35:12.522106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.009 qpair failed and we were unable to recover it. 00:27:21.009 [2024-07-27 01:35:12.531894] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.009 [2024-07-27 01:35:12.532039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.009 [2024-07-27 01:35:12.532070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.009 [2024-07-27 01:35:12.532096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.009 [2024-07-27 01:35:12.532112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.009 [2024-07-27 01:35:12.532140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.009 qpair failed and we were unable to recover it. 00:27:21.009 [2024-07-27 01:35:12.541912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.009 [2024-07-27 01:35:12.542055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.009 [2024-07-27 01:35:12.542087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.009 [2024-07-27 01:35:12.542102] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.009 [2024-07-27 01:35:12.542116] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.009 [2024-07-27 01:35:12.542144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.009 qpair failed and we were unable to recover it. 
00:27:21.009 [2024-07-27 01:35:12.551962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.009 [2024-07-27 01:35:12.552116] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.009 [2024-07-27 01:35:12.552142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.010 [2024-07-27 01:35:12.552157] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.010 [2024-07-27 01:35:12.552171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.010 [2024-07-27 01:35:12.552200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.010 qpair failed and we were unable to recover it. 00:27:21.010 [2024-07-27 01:35:12.562009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.010 [2024-07-27 01:35:12.562173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.010 [2024-07-27 01:35:12.562198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.010 [2024-07-27 01:35:12.562214] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.010 [2024-07-27 01:35:12.562227] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.010 [2024-07-27 01:35:12.562256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.010 qpair failed and we were unable to recover it. 00:27:21.010 [2024-07-27 01:35:12.572025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.010 [2024-07-27 01:35:12.572181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.010 [2024-07-27 01:35:12.572208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.010 [2024-07-27 01:35:12.572223] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.010 [2024-07-27 01:35:12.572237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.010 [2024-07-27 01:35:12.572266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.010 qpair failed and we were unable to recover it. 
00:27:21.010 [2024-07-27 01:35:12.582123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.010 [2024-07-27 01:35:12.582270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.010 [2024-07-27 01:35:12.582295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.010 [2024-07-27 01:35:12.582311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.010 [2024-07-27 01:35:12.582325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.010 [2024-07-27 01:35:12.582353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.010 qpair failed and we were unable to recover it. 00:27:21.010 [2024-07-27 01:35:12.592175] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.010 [2024-07-27 01:35:12.592327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.010 [2024-07-27 01:35:12.592353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.010 [2024-07-27 01:35:12.592368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.010 [2024-07-27 01:35:12.592382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.010 [2024-07-27 01:35:12.592411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.010 qpair failed and we were unable to recover it. 00:27:21.010 [2024-07-27 01:35:12.602117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.010 [2024-07-27 01:35:12.602264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.010 [2024-07-27 01:35:12.602289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.010 [2024-07-27 01:35:12.602304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.010 [2024-07-27 01:35:12.602317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.010 [2024-07-27 01:35:12.602346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.010 qpair failed and we were unable to recover it. 
00:27:21.010 [2024-07-27 01:35:12.612160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.010 [2024-07-27 01:35:12.612326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.010 [2024-07-27 01:35:12.612352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.010 [2024-07-27 01:35:12.612367] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.010 [2024-07-27 01:35:12.612381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.010 [2024-07-27 01:35:12.612409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.010 qpair failed and we were unable to recover it. 00:27:21.010 [2024-07-27 01:35:12.622154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.010 [2024-07-27 01:35:12.622294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.010 [2024-07-27 01:35:12.622319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.010 [2024-07-27 01:35:12.622341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.010 [2024-07-27 01:35:12.622355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.010 [2024-07-27 01:35:12.622385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.010 qpair failed and we were unable to recover it. 00:27:21.010 [2024-07-27 01:35:12.632227] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.010 [2024-07-27 01:35:12.632377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.010 [2024-07-27 01:35:12.632403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.010 [2024-07-27 01:35:12.632418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.010 [2024-07-27 01:35:12.632432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.010 [2024-07-27 01:35:12.632461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.010 qpair failed and we were unable to recover it. 
00:27:21.010 [2024-07-27 01:35:12.642221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.010 [2024-07-27 01:35:12.642371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.010 [2024-07-27 01:35:12.642397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.010 [2024-07-27 01:35:12.642413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.010 [2024-07-27 01:35:12.642426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.010 [2024-07-27 01:35:12.642455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.010 qpair failed and we were unable to recover it. 00:27:21.010 [2024-07-27 01:35:12.652271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.010 [2024-07-27 01:35:12.652469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.010 [2024-07-27 01:35:12.652495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.010 [2024-07-27 01:35:12.652510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.010 [2024-07-27 01:35:12.652524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.010 [2024-07-27 01:35:12.652552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.010 qpair failed and we were unable to recover it. 00:27:21.010 [2024-07-27 01:35:12.662289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.010 [2024-07-27 01:35:12.662433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.010 [2024-07-27 01:35:12.662458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.010 [2024-07-27 01:35:12.662473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.010 [2024-07-27 01:35:12.662487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.010 [2024-07-27 01:35:12.662516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.010 qpair failed and we were unable to recover it. 
00:27:21.010 [2024-07-27 01:35:12.672374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.010 [2024-07-27 01:35:12.672528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.010 [2024-07-27 01:35:12.672553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.010 [2024-07-27 01:35:12.672571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.010 [2024-07-27 01:35:12.672585] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.010 [2024-07-27 01:35:12.672613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.010 qpair failed and we were unable to recover it. 00:27:21.010 [2024-07-27 01:35:12.682367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.010 [2024-07-27 01:35:12.682516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.010 [2024-07-27 01:35:12.682542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.010 [2024-07-27 01:35:12.682557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.010 [2024-07-27 01:35:12.682571] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.011 [2024-07-27 01:35:12.682599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.011 qpair failed and we were unable to recover it. 00:27:21.011 [2024-07-27 01:35:12.692423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.011 [2024-07-27 01:35:12.692629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.011 [2024-07-27 01:35:12.692655] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.011 [2024-07-27 01:35:12.692671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.011 [2024-07-27 01:35:12.692685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.011 [2024-07-27 01:35:12.692714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.011 qpair failed and we were unable to recover it. 
00:27:21.011 [2024-07-27 01:35:12.702404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.011 [2024-07-27 01:35:12.702553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.011 [2024-07-27 01:35:12.702579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.011 [2024-07-27 01:35:12.702594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.011 [2024-07-27 01:35:12.702607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.011 [2024-07-27 01:35:12.702636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.011 qpair failed and we were unable to recover it. 00:27:21.011 [2024-07-27 01:35:12.712441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.011 [2024-07-27 01:35:12.712602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.011 [2024-07-27 01:35:12.712637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.011 [2024-07-27 01:35:12.712659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.011 [2024-07-27 01:35:12.712674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.011 [2024-07-27 01:35:12.712702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.011 qpair failed and we were unable to recover it. 00:27:21.011 [2024-07-27 01:35:12.722499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.011 [2024-07-27 01:35:12.722660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.011 [2024-07-27 01:35:12.722685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.011 [2024-07-27 01:35:12.722701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.011 [2024-07-27 01:35:12.722715] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9429f0 00:27:21.011 [2024-07-27 01:35:12.722744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.011 qpair failed and we were unable to recover it. 
00:27:21.011 [2024-07-27 01:35:12.732531] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.011 [2024-07-27 01:35:12.732675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.011 [2024-07-27 01:35:12.732709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.011 [2024-07-27 01:35:12.732726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.011 [2024-07-27 01:35:12.732741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fddd8000b90 00:27:21.011 [2024-07-27 01:35:12.732774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:21.011 qpair failed and we were unable to recover it. 00:27:21.011 [2024-07-27 01:35:12.742564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.011 [2024-07-27 01:35:12.742708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.011 [2024-07-27 01:35:12.742736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.011 [2024-07-27 01:35:12.742752] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.011 [2024-07-27 01:35:12.742766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fddd8000b90 00:27:21.011 [2024-07-27 01:35:12.742798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:21.011 qpair failed and we were unable to recover it. 00:27:21.011 [2024-07-27 01:35:12.742893] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:21.011 A controller has encountered a failure and is being reset. 00:27:21.011 qpair failed and we were unable to recover it. 00:27:21.011 qpair failed and we were unable to recover it. 00:27:21.271 Controller properly reset. 00:27:21.272 Initializing NVMe Controllers 00:27:21.272 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:21.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:21.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:21.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:21.272 Initialization complete. Launching workers. 
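Once the Keep Alive submission fails, the host tears the controller down and reconnects it ("Controller properly reset"), re-attaching to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 and associating one I/O qpair per lcore (0-3) before the worker threads restart on the lines that follow; that recovery path is what host/target_disconnect.sh is exercising here. For orientation only, a minimal target configuration matching the address and NQN seen in this log could be built by hand with scripts/rpc.py; this is an illustrative sketch with hypothetical bdev and serial names, not the commands the test itself runs:

    # Hypothetical stand-alone setup of the subsystem this host keeps reconnecting to
    ./scripts/rpc.py nvmf_create_transport -t TCP
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420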
00:27:21.272 Starting thread on core 1 00:27:21.272 Starting thread on core 2 00:27:21.272 Starting thread on core 3 00:27:21.272 Starting thread on core 0 00:27:21.272 01:35:12 -- host/target_disconnect.sh@59 -- # sync 00:27:21.272 00:27:21.272 real 0m11.668s 00:27:21.272 user 0m19.694s 00:27:21.272 sys 0m5.690s 00:27:21.272 01:35:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.272 01:35:12 -- common/autotest_common.sh@10 -- # set +x 00:27:21.272 ************************************ 00:27:21.272 END TEST nvmf_target_disconnect_tc2 00:27:21.272 ************************************ 00:27:21.272 01:35:12 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:27:21.272 01:35:12 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:21.272 01:35:12 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:27:21.272 01:35:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:21.272 01:35:12 -- nvmf/common.sh@116 -- # sync 00:27:21.272 01:35:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:21.272 01:35:12 -- nvmf/common.sh@119 -- # set +e 00:27:21.272 01:35:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:21.272 01:35:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:21.272 rmmod nvme_tcp 00:27:21.272 rmmod nvme_fabrics 00:27:21.272 rmmod nvme_keyring 00:27:21.272 01:35:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:21.272 01:35:12 -- nvmf/common.sh@123 -- # set -e 00:27:21.272 01:35:12 -- nvmf/common.sh@124 -- # return 0 00:27:21.272 01:35:12 -- nvmf/common.sh@477 -- # '[' -n 754843 ']' 00:27:21.272 01:35:12 -- nvmf/common.sh@478 -- # killprocess 754843 00:27:21.272 01:35:12 -- common/autotest_common.sh@926 -- # '[' -z 754843 ']' 00:27:21.272 01:35:12 -- common/autotest_common.sh@930 -- # kill -0 754843 00:27:21.272 01:35:12 -- common/autotest_common.sh@931 -- # uname 00:27:21.272 01:35:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:21.272 01:35:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 754843 00:27:21.272 01:35:12 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:27:21.272 01:35:12 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:27:21.272 01:35:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 754843' 00:27:21.272 killing process with pid 754843 00:27:21.272 01:35:12 -- common/autotest_common.sh@945 -- # kill 754843 00:27:21.272 01:35:12 -- common/autotest_common.sh@950 -- # wait 754843 00:27:21.841 01:35:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:21.841 01:35:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:21.841 01:35:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:21.841 01:35:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:21.841 01:35:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:21.841 01:35:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.841 01:35:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.841 01:35:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.743 01:35:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:23.743 00:27:23.743 real 0m16.366s 00:27:23.743 user 0m46.466s 00:27:23.743 sys 0m7.658s 00:27:23.743 01:35:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:23.743 01:35:15 -- common/autotest_common.sh@10 -- # set +x 00:27:23.743 ************************************ 00:27:23.743 END TEST nvmf_target_disconnect 00:27:23.743 
************************************ 00:27:23.743 01:35:15 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:27:23.743 01:35:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:23.743 01:35:15 -- common/autotest_common.sh@10 -- # set +x 00:27:23.743 01:35:15 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:27:23.743 00:27:23.743 real 21m3.903s 00:27:23.743 user 60m26.979s 00:27:23.743 sys 5m8.419s 00:27:23.743 01:35:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:23.743 01:35:15 -- common/autotest_common.sh@10 -- # set +x 00:27:23.743 ************************************ 00:27:23.743 END TEST nvmf_tcp 00:27:23.743 ************************************ 00:27:23.743 01:35:15 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:27:23.743 01:35:15 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:23.743 01:35:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:23.743 01:35:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:23.743 01:35:15 -- common/autotest_common.sh@10 -- # set +x 00:27:23.743 ************************************ 00:27:23.743 START TEST spdkcli_nvmf_tcp 00:27:23.743 ************************************ 00:27:23.743 01:35:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:23.743 * Looking for test storage... 00:27:23.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:23.743 01:35:15 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:23.743 01:35:15 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:23.743 01:35:15 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:23.743 01:35:15 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.743 01:35:15 -- nvmf/common.sh@7 -- # uname -s 00:27:23.743 01:35:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.743 01:35:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.743 01:35:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.743 01:35:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.743 01:35:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.743 01:35:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.743 01:35:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.743 01:35:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.743 01:35:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.743 01:35:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.743 01:35:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:23.743 01:35:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:23.743 01:35:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.743 01:35:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.743 01:35:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.743 01:35:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.743 01:35:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
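The spdkcli_nvmf_tcp run that begins here drives the same NVMe-oF target through SPDK's interactive CLI: run_nvmf_tgt starts build/bin/nvmf_tgt -m 0x3, and spdkcli_job.py then replays the scripted command list shown further down, checking each "Executing command" result. Outside the harness, the same objects could be created one at a time from the spdkcli prompt (scripts/spdkcli.py); the following is only a sketch reusing names that appear later in this log, not the test's own invocation:

    # Hypothetical interactive session at the spdkcli prompt
    /bdevs/malloc create 32 512 Malloc1
    nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
    /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    ll /nvmf    # prints the tree that check_match later diffs against spdkcli_nvmf.test.match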
00:27:23.743 01:35:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.743 01:35:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.743 01:35:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.743 01:35:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.743 01:35:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.743 01:35:15 -- paths/export.sh@5 -- # export PATH 00:27:23.743 01:35:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.743 01:35:15 -- nvmf/common.sh@46 -- # : 0 00:27:23.743 01:35:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:23.743 01:35:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:23.743 01:35:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:23.743 01:35:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.743 01:35:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.743 01:35:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:23.743 01:35:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:23.743 01:35:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:23.743 01:35:15 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:23.743 01:35:15 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:23.743 01:35:15 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:23.744 01:35:15 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:23.744 01:35:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:23.744 01:35:15 -- common/autotest_common.sh@10 -- # set +x 00:27:23.744 01:35:15 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:23.744 01:35:15 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=756061 00:27:23.744 01:35:15 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:23.744 01:35:15 -- spdkcli/common.sh@34 -- # waitforlisten 756061 00:27:23.744 01:35:15 -- common/autotest_common.sh@819 -- # '[' -z 756061 ']' 00:27:23.744 01:35:15 -- common/autotest_common.sh@823 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:27:23.744 01:35:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:23.744 01:35:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.744 01:35:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:23.744 01:35:15 -- common/autotest_common.sh@10 -- # set +x 00:27:24.004 [2024-07-27 01:35:15.514129] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:24.004 [2024-07-27 01:35:15.514214] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756061 ] 00:27:24.004 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.004 [2024-07-27 01:35:15.580554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:24.004 [2024-07-27 01:35:15.691787] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:24.004 [2024-07-27 01:35:15.693083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.004 [2024-07-27 01:35:15.693103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.939 01:35:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:24.939 01:35:16 -- common/autotest_common.sh@852 -- # return 0 00:27:24.939 01:35:16 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:24.939 01:35:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:24.939 01:35:16 -- common/autotest_common.sh@10 -- # set +x 00:27:24.939 01:35:16 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:24.939 01:35:16 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:24.939 01:35:16 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:24.939 01:35:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:24.939 01:35:16 -- common/autotest_common.sh@10 -- # set +x 00:27:24.939 01:35:16 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:24.939 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:24.939 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:24.939 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:24.939 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:24.939 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:24.939 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:24.939 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:24.939 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:24.939 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:24.939 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:24.939 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:24.939 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:24.939 ' 00:27:25.199 [2024-07-27 01:35:16.928041] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:27.734 [2024-07-27 01:35:19.098567] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.670 [2024-07-27 01:35:20.339031] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:31.204 [2024-07-27 01:35:22.626238] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:33.105 [2024-07-27 01:35:24.600679] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:34.477 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:34.477 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:34.477 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:34.477 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:34.477 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:34.477 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:34.477 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:34.477 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:34.477 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:34.477 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:34.477 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:34.477 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:34.477 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:34.477 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:34.477 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:34.477 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:34.477 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:34.477 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:34.477 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:34.477 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:34.477 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:34.477 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:34.477 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:34.477 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:34.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:34.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:34.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:34.478 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:34.478 01:35:26 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:34.478 01:35:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:34.478 01:35:26 -- common/autotest_common.sh@10 -- # set +x 00:27:34.794 01:35:26 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:34.794 01:35:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:34.794 01:35:26 -- common/autotest_common.sh@10 -- # set +x 00:27:34.794 01:35:26 -- spdkcli/nvmf.sh@69 -- # check_match 00:27:34.794 01:35:26 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:35.096 01:35:26 -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:35.096 01:35:26 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:35.096 01:35:26 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:35.096 01:35:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:35.096 01:35:26 -- common/autotest_common.sh@10 -- # set +x 00:27:35.096 01:35:26 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:35.096 01:35:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:35.096 01:35:26 -- common/autotest_common.sh@10 -- # set +x 00:27:35.096 01:35:26 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:35.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:35.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:35.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:35.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:35.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:35.096 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:35.096 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:35.096 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:35.096 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:35.096 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:35.096 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:35.096 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:35.096 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:35.096 ' 00:27:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:40.366 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:40.366 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:40.366 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:40.366 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:40.366 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:40.366 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:40.366 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:40.366 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:40.366 01:35:32 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:40.366 01:35:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:40.366 01:35:32 -- common/autotest_common.sh@10 -- # set +x 00:27:40.366 01:35:32 -- spdkcli/nvmf.sh@90 -- # killprocess 756061 00:27:40.366 01:35:32 -- common/autotest_common.sh@926 -- # '[' -z 756061 ']' 00:27:40.366 01:35:32 -- common/autotest_common.sh@930 -- # kill -0 756061 00:27:40.366 01:35:32 -- common/autotest_common.sh@931 -- # uname 00:27:40.366 01:35:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:40.366 01:35:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 756061 00:27:40.366 01:35:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:40.366 01:35:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:40.366 01:35:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 756061' 00:27:40.366 killing process with pid 756061 00:27:40.366 01:35:32 -- common/autotest_common.sh@945 -- # kill 756061 00:27:40.366 [2024-07-27 01:35:32.071630] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:40.366 01:35:32 -- common/autotest_common.sh@950 -- # wait 756061 00:27:40.625 01:35:32 -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:40.625 01:35:32 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:40.625 01:35:32 -- spdkcli/common.sh@13 -- # '[' -n 756061 ']' 00:27:40.625 01:35:32 -- spdkcli/common.sh@14 -- # killprocess 756061 00:27:40.625 01:35:32 -- common/autotest_common.sh@926 -- # '[' -z 756061 ']' 00:27:40.625 01:35:32 -- common/autotest_common.sh@930 -- # kill -0 756061 00:27:40.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (756061) - No such process 00:27:40.625 01:35:32 -- common/autotest_common.sh@953 -- # echo 'Process with pid 756061 is not found' 00:27:40.625 Process with pid 756061 is not found 00:27:40.625 01:35:32 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:40.625 01:35:32 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:40.625 01:35:32 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:40.625 00:27:40.625 real 0m16.945s 00:27:40.625 user 0m35.973s 00:27:40.625 sys 0m0.880s 00:27:40.625 01:35:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.625 01:35:32 -- common/autotest_common.sh@10 -- # set +x 00:27:40.625 ************************************ 00:27:40.625 END TEST spdkcli_nvmf_tcp 00:27:40.625 ************************************ 00:27:40.625 01:35:32 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:40.625 01:35:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:40.625 01:35:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:40.625 01:35:32 -- common/autotest_common.sh@10 -- # set +x 00:27:40.625 ************************************ 00:27:40.625 START TEST 
nvmf_identify_passthru 00:27:40.625 ************************************ 00:27:40.625 01:35:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:40.884 * Looking for test storage... 00:27:40.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:40.884 01:35:32 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:40.884 01:35:32 -- nvmf/common.sh@7 -- # uname -s 00:27:40.884 01:35:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.884 01:35:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.884 01:35:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.884 01:35:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.884 01:35:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:40.884 01:35:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:40.884 01:35:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.884 01:35:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:40.884 01:35:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.884 01:35:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:40.884 01:35:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:40.884 01:35:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:40.884 01:35:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.884 01:35:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.884 01:35:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:40.884 01:35:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:40.884 01:35:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.884 01:35:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.884 01:35:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.884 01:35:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.884 01:35:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.884 01:35:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.884 01:35:32 -- paths/export.sh@5 -- # export PATH 00:27:40.884 
01:35:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.884 01:35:32 -- nvmf/common.sh@46 -- # : 0 00:27:40.884 01:35:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:40.884 01:35:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:40.884 01:35:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:40.884 01:35:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.884 01:35:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.884 01:35:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:40.884 01:35:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:40.884 01:35:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:40.884 01:35:32 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:40.884 01:35:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.884 01:35:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.884 01:35:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.885 01:35:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.885 01:35:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.885 01:35:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.885 01:35:32 -- paths/export.sh@5 -- # export PATH 00:27:40.885 01:35:32 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.885 01:35:32 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:40.885 01:35:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:40.885 01:35:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.885 01:35:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:40.885 01:35:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:40.885 01:35:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:40.885 01:35:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.885 01:35:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:40.885 01:35:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.885 01:35:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:40.885 01:35:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:40.885 01:35:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:40.885 01:35:32 -- common/autotest_common.sh@10 -- # set +x 00:27:42.786 01:35:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:42.786 01:35:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:42.786 01:35:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:42.786 01:35:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:42.786 01:35:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:42.786 01:35:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:42.786 01:35:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:42.786 01:35:34 -- nvmf/common.sh@294 -- # net_devs=() 00:27:42.786 01:35:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:42.786 01:35:34 -- nvmf/common.sh@295 -- # e810=() 00:27:42.786 01:35:34 -- nvmf/common.sh@295 -- # local -ga e810 00:27:42.786 01:35:34 -- nvmf/common.sh@296 -- # x722=() 00:27:42.786 01:35:34 -- nvmf/common.sh@296 -- # local -ga x722 00:27:42.786 01:35:34 -- nvmf/common.sh@297 -- # mlx=() 00:27:42.786 01:35:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:42.786 01:35:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:42.786 01:35:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:42.786 01:35:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:42.786 01:35:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:42.786 01:35:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:42.786 01:35:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:42.786 01:35:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:42.786 01:35:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:42.786 01:35:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:42.786 01:35:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:42.786 01:35:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:42.786 01:35:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:42.786 01:35:34 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:42.786 01:35:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:42.786 01:35:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:42.786 01:35:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:42.786 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:42.786 01:35:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:42.786 01:35:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:42.786 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:42.786 01:35:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:42.786 01:35:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:42.786 01:35:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.786 01:35:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:42.786 01:35:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.786 01:35:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:42.786 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:42.786 01:35:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.786 01:35:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:42.786 01:35:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.786 01:35:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:42.786 01:35:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.786 01:35:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:42.786 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:42.786 01:35:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.786 01:35:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:42.786 01:35:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:42.786 01:35:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:42.786 01:35:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.786 01:35:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:42.786 01:35:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:42.786 01:35:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:42.786 01:35:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:42.786 01:35:34 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:42.786 01:35:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:42.786 01:35:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:42.786 01:35:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.786 01:35:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:42.786 01:35:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:42.786 01:35:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:42.786 01:35:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:42.786 01:35:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:42.786 01:35:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:42.786 01:35:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:42.786 01:35:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:42.786 01:35:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:42.786 01:35:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:42.786 01:35:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:42.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:42.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:27:42.786 00:27:42.786 --- 10.0.0.2 ping statistics --- 00:27:42.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.786 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:27:42.786 01:35:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:42.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:42.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:27:42.786 00:27:42.786 --- 10.0.0.1 ping statistics --- 00:27:42.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.786 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:27:42.786 01:35:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.786 01:35:34 -- nvmf/common.sh@410 -- # return 0 00:27:42.786 01:35:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:42.786 01:35:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.786 01:35:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:42.786 01:35:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.786 01:35:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:42.786 01:35:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:42.786 01:35:34 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:42.786 01:35:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:42.787 01:35:34 -- common/autotest_common.sh@10 -- # set +x 00:27:42.787 01:35:34 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:42.787 01:35:34 -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:42.787 01:35:34 -- common/autotest_common.sh@1509 -- # local bdfs 00:27:42.787 01:35:34 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:42.787 01:35:34 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:42.787 01:35:34 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:42.787 01:35:34 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:42.787 01:35:34 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:27:42.787 01:35:34 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:42.787 01:35:34 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:43.044 01:35:34 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:43.044 01:35:34 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:27:43.044 01:35:34 -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:27:43.044 01:35:34 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:27:43.044 01:35:34 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:27:43.044 01:35:34 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:27:43.044 01:35:34 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:43.044 01:35:34 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:43.044 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.233 01:35:38 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:27:47.233 01:35:38 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:27:47.233 01:35:38 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:47.233 01:35:38 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:47.233 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.428 01:35:43 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:51.428 01:35:43 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:51.428 01:35:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:51.428 01:35:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.428 01:35:43 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:51.428 01:35:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:51.428 01:35:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.428 01:35:43 -- target/identify_passthru.sh@31 -- # nvmfpid=760792 00:27:51.428 01:35:43 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:51.428 01:35:43 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:51.428 01:35:43 -- target/identify_passthru.sh@35 -- # waitforlisten 760792 00:27:51.428 01:35:43 -- common/autotest_common.sh@819 -- # '[' -z 760792 ']' 00:27:51.428 01:35:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.428 01:35:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:51.428 01:35:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.428 01:35:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:51.428 01:35:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.428 [2024-07-27 01:35:43.080160] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
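In short, the passthru-identify flow traced below reduces to the RPC sequence sketched here; this is a condensed restatement of the rpc_cmd calls in this run (assuming the scripts/rpc.py client that rpc_cmd wraps, and the 0000:88:00.0 / 10.0.0.2 values observed above). framework_start_init is required because the target was launched with --wait-for-rpc.
# enable identify passthru before the framework starts, then bring the target up
scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# attach the local drive and export it over NVMe/TCP
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# identify over the fabric; serial and model number must match the PCIe-side identify
build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'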
00:27:51.428 [2024-07-27 01:35:43.080247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.428 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.428 [2024-07-27 01:35:43.144184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.687 [2024-07-27 01:35:43.250551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:51.687 [2024-07-27 01:35:43.250709] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.687 [2024-07-27 01:35:43.250726] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.687 [2024-07-27 01:35:43.250739] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.687 [2024-07-27 01:35:43.250789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.687 [2024-07-27 01:35:43.250850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.687 [2024-07-27 01:35:43.250916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.687 [2024-07-27 01:35:43.250919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.687 01:35:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:51.687 01:35:43 -- common/autotest_common.sh@852 -- # return 0 00:27:51.687 01:35:43 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:51.687 01:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.687 01:35:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.687 INFO: Log level set to 20 00:27:51.687 INFO: Requests: 00:27:51.687 { 00:27:51.687 "jsonrpc": "2.0", 00:27:51.687 "method": "nvmf_set_config", 00:27:51.687 "id": 1, 00:27:51.687 "params": { 00:27:51.687 "admin_cmd_passthru": { 00:27:51.687 "identify_ctrlr": true 00:27:51.687 } 00:27:51.687 } 00:27:51.687 } 00:27:51.687 00:27:51.687 INFO: response: 00:27:51.687 { 00:27:51.687 "jsonrpc": "2.0", 00:27:51.687 "id": 1, 00:27:51.687 "result": true 00:27:51.687 } 00:27:51.687 00:27:51.687 01:35:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.687 01:35:43 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:51.687 01:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.687 01:35:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.687 INFO: Setting log level to 20 00:27:51.687 INFO: Setting log level to 20 00:27:51.687 INFO: Log level set to 20 00:27:51.687 INFO: Log level set to 20 00:27:51.687 INFO: Requests: 00:27:51.687 { 00:27:51.687 "jsonrpc": "2.0", 00:27:51.687 "method": "framework_start_init", 00:27:51.687 "id": 1 00:27:51.687 } 00:27:51.687 00:27:51.687 INFO: Requests: 00:27:51.687 { 00:27:51.687 "jsonrpc": "2.0", 00:27:51.687 "method": "framework_start_init", 00:27:51.687 "id": 1 00:27:51.687 } 00:27:51.687 00:27:51.687 [2024-07-27 01:35:43.400420] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:51.687 INFO: response: 00:27:51.687 { 00:27:51.687 "jsonrpc": "2.0", 00:27:51.687 "id": 1, 00:27:51.687 "result": true 00:27:51.687 } 00:27:51.687 00:27:51.687 INFO: response: 00:27:51.687 { 00:27:51.687 "jsonrpc": "2.0", 00:27:51.687 "id": 1, 00:27:51.687 "result": true 00:27:51.687 } 00:27:51.687 00:27:51.687 01:35:43 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.687 01:35:43 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:51.687 01:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.688 01:35:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.688 INFO: Setting log level to 40 00:27:51.688 INFO: Setting log level to 40 00:27:51.688 INFO: Setting log level to 40 00:27:51.688 [2024-07-27 01:35:43.410501] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.688 01:35:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.688 01:35:43 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:51.688 01:35:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:51.688 01:35:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.688 01:35:43 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:27:51.688 01:35:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.688 01:35:43 -- common/autotest_common.sh@10 -- # set +x 00:27:54.975 Nvme0n1 00:27:54.975 01:35:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.975 01:35:46 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:54.975 01:35:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.975 01:35:46 -- common/autotest_common.sh@10 -- # set +x 00:27:54.975 01:35:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.975 01:35:46 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:54.975 01:35:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.975 01:35:46 -- common/autotest_common.sh@10 -- # set +x 00:27:54.975 01:35:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.975 01:35:46 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.975 01:35:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.975 01:35:46 -- common/autotest_common.sh@10 -- # set +x 00:27:54.975 [2024-07-27 01:35:46.311131] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.975 01:35:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.975 01:35:46 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:54.975 01:35:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.975 01:35:46 -- common/autotest_common.sh@10 -- # set +x 00:27:54.975 [2024-07-27 01:35:46.318847] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:54.975 [ 00:27:54.975 { 00:27:54.975 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:54.975 "subtype": "Discovery", 00:27:54.975 "listen_addresses": [], 00:27:54.975 "allow_any_host": true, 00:27:54.975 "hosts": [] 00:27:54.975 }, 00:27:54.975 { 00:27:54.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.975 "subtype": "NVMe", 00:27:54.975 "listen_addresses": [ 00:27:54.975 { 00:27:54.975 "transport": "TCP", 00:27:54.975 "trtype": "TCP", 00:27:54.975 "adrfam": "IPv4", 00:27:54.975 "traddr": "10.0.0.2", 00:27:54.975 "trsvcid": "4420" 00:27:54.975 } 00:27:54.975 ], 00:27:54.975 "allow_any_host": true, 00:27:54.975 "hosts": [], 00:27:54.975 "serial_number": "SPDK00000000000001", 
00:27:54.975 "model_number": "SPDK bdev Controller", 00:27:54.975 "max_namespaces": 1, 00:27:54.975 "min_cntlid": 1, 00:27:54.975 "max_cntlid": 65519, 00:27:54.975 "namespaces": [ 00:27:54.975 { 00:27:54.975 "nsid": 1, 00:27:54.975 "bdev_name": "Nvme0n1", 00:27:54.975 "name": "Nvme0n1", 00:27:54.975 "nguid": "FF0602714D234FEE9A9E0DC8CB63FFE5", 00:27:54.975 "uuid": "ff060271-4d23-4fee-9a9e-0dc8cb63ffe5" 00:27:54.975 } 00:27:54.975 ] 00:27:54.975 } 00:27:54.975 ] 00:27:54.975 01:35:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.975 01:35:46 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:54.975 01:35:46 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:54.975 01:35:46 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:54.975 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.975 01:35:46 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:27:54.975 01:35:46 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:54.975 01:35:46 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:54.975 01:35:46 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:54.975 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.234 01:35:46 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:55.234 01:35:46 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:27:55.234 01:35:46 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:55.234 01:35:46 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.234 01:35:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.234 01:35:46 -- common/autotest_common.sh@10 -- # set +x 00:27:55.234 01:35:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.234 01:35:46 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:55.234 01:35:46 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:55.234 01:35:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:55.234 01:35:46 -- nvmf/common.sh@116 -- # sync 00:27:55.234 01:35:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:55.234 01:35:46 -- nvmf/common.sh@119 -- # set +e 00:27:55.234 01:35:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:55.234 01:35:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:55.234 rmmod nvme_tcp 00:27:55.234 rmmod nvme_fabrics 00:27:55.234 rmmod nvme_keyring 00:27:55.234 01:35:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:55.234 01:35:46 -- nvmf/common.sh@123 -- # set -e 00:27:55.234 01:35:46 -- nvmf/common.sh@124 -- # return 0 00:27:55.234 01:35:46 -- nvmf/common.sh@477 -- # '[' -n 760792 ']' 00:27:55.234 01:35:46 -- nvmf/common.sh@478 -- # killprocess 760792 00:27:55.235 01:35:46 -- common/autotest_common.sh@926 -- # '[' -z 760792 ']' 00:27:55.235 01:35:46 -- common/autotest_common.sh@930 -- # kill -0 760792 00:27:55.235 01:35:46 -- common/autotest_common.sh@931 -- # uname 00:27:55.235 01:35:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:55.235 01:35:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 760792 00:27:55.235 01:35:46 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:55.235 01:35:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:55.235 01:35:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 760792' 00:27:55.235 killing process with pid 760792 00:27:55.235 01:35:46 -- common/autotest_common.sh@945 -- # kill 760792 00:27:55.235 [2024-07-27 01:35:46.928416] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:55.235 01:35:46 -- common/autotest_common.sh@950 -- # wait 760792 00:27:57.140 01:35:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:57.140 01:35:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:57.140 01:35:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:57.140 01:35:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:57.140 01:35:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:57.140 01:35:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.140 01:35:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:57.140 01:35:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.048 01:35:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:59.048 00:27:59.048 real 0m18.169s 00:27:59.048 user 0m27.344s 00:27:59.048 sys 0m2.291s 00:27:59.048 01:35:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:59.048 01:35:50 -- common/autotest_common.sh@10 -- # set +x 00:27:59.048 ************************************ 00:27:59.048 END TEST nvmf_identify_passthru 00:27:59.048 ************************************ 00:27:59.048 01:35:50 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:59.048 01:35:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:59.048 01:35:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:59.048 01:35:50 -- common/autotest_common.sh@10 -- # set +x 00:27:59.048 ************************************ 00:27:59.048 START TEST nvmf_dif 00:27:59.048 ************************************ 00:27:59.048 01:35:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:59.048 * Looking for test storage... 
00:27:59.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:59.048 01:35:50 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:59.048 01:35:50 -- nvmf/common.sh@7 -- # uname -s 00:27:59.049 01:35:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:59.049 01:35:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:59.049 01:35:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:59.049 01:35:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:59.049 01:35:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:59.049 01:35:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:59.049 01:35:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:59.049 01:35:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:59.049 01:35:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:59.049 01:35:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:59.049 01:35:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:59.049 01:35:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:59.049 01:35:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:59.049 01:35:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:59.049 01:35:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:59.049 01:35:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:59.049 01:35:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:59.049 01:35:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.049 01:35:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.049 01:35:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.049 01:35:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.049 01:35:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.049 01:35:50 -- paths/export.sh@5 -- # export PATH 00:27:59.049 01:35:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.049 01:35:50 -- nvmf/common.sh@46 -- # : 0 00:27:59.049 01:35:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:59.049 01:35:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:59.049 01:35:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:59.049 01:35:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.049 01:35:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.049 01:35:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:59.049 01:35:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:59.049 01:35:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:59.049 01:35:50 -- target/dif.sh@15 -- # NULL_META=16 00:27:59.049 01:35:50 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:59.049 01:35:50 -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:59.049 01:35:50 -- target/dif.sh@15 -- # NULL_DIF=1 00:27:59.049 01:35:50 -- target/dif.sh@135 -- # nvmftestinit 00:27:59.049 01:35:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:59.049 01:35:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.049 01:35:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:59.049 01:35:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:59.049 01:35:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:59.049 01:35:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.049 01:35:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:59.049 01:35:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.049 01:35:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:59.049 01:35:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:59.049 01:35:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:59.049 01:35:50 -- common/autotest_common.sh@10 -- # set +x 00:28:00.954 01:35:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:00.954 01:35:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:00.954 01:35:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:00.954 01:35:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:00.954 01:35:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:00.954 01:35:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:00.954 01:35:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:00.954 01:35:52 -- nvmf/common.sh@294 -- # net_devs=() 00:28:00.954 01:35:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:00.954 01:35:52 -- nvmf/common.sh@295 -- # e810=() 00:28:00.954 01:35:52 -- nvmf/common.sh@295 -- # local -ga e810 00:28:00.954 01:35:52 -- nvmf/common.sh@296 -- # x722=() 00:28:00.954 01:35:52 -- nvmf/common.sh@296 -- # local -ga x722 00:28:00.954 01:35:52 -- nvmf/common.sh@297 -- # mlx=() 00:28:00.954 01:35:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:00.954 01:35:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.954 01:35:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.954 01:35:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.954 01:35:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:28:00.954 01:35:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.954 01:35:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.954 01:35:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.954 01:35:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.954 01:35:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.954 01:35:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.954 01:35:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.954 01:35:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:00.954 01:35:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:00.954 01:35:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:00.954 01:35:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:00.954 01:35:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:00.954 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:00.954 01:35:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:00.954 01:35:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:00.954 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:00.954 01:35:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:00.954 01:35:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:00.954 01:35:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.954 01:35:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:00.954 01:35:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.954 01:35:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:00.954 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:00.954 01:35:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.954 01:35:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:00.954 01:35:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.954 01:35:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:00.954 01:35:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.954 01:35:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:00.954 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:00.954 01:35:52 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:00.954 01:35:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:00.954 01:35:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:00.954 01:35:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:00.954 01:35:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:00.954 01:35:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.954 01:35:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.954 01:35:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.954 01:35:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:00.954 01:35:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.954 01:35:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.954 01:35:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:00.954 01:35:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.954 01:35:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.954 01:35:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:00.954 01:35:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:00.954 01:35:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.954 01:35:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.954 01:35:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.954 01:35:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.954 01:35:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:00.954 01:35:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.954 01:35:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.954 01:35:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.954 01:35:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:00.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:28:00.954 00:28:00.954 --- 10.0.0.2 ping statistics --- 00:28:00.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.954 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:28:00.954 01:35:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:00.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:28:00.954 00:28:00.954 --- 10.0.0.1 ping statistics --- 00:28:00.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.954 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:28:00.954 01:35:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.954 01:35:52 -- nvmf/common.sh@410 -- # return 0 00:28:00.954 01:35:52 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:28:00.954 01:35:52 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:01.893 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:01.893 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:01.893 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:01.893 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:01.893 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:01.893 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:01.893 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:01.893 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:01.893 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:01.893 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:01.893 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:01.893 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:01.893 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:01.893 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:01.893 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:01.893 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:01.893 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:02.151 01:35:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.151 01:35:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:02.151 01:35:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:02.151 01:35:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.151 01:35:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:02.151 01:35:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:02.151 01:35:53 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:02.151 01:35:53 -- target/dif.sh@137 -- # nvmfappstart 00:28:02.151 01:35:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:02.151 01:35:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:02.151 01:35:53 -- common/autotest_common.sh@10 -- # set +x 00:28:02.151 01:35:53 -- nvmf/common.sh@469 -- # nvmfpid=764113 00:28:02.151 01:35:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:02.151 01:35:53 -- nvmf/common.sh@470 -- # waitforlisten 764113 00:28:02.151 01:35:53 -- common/autotest_common.sh@819 -- # '[' -z 764113 ']' 00:28:02.151 01:35:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.151 01:35:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:02.151 01:35:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
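For reference, the test network that nvmf_tcp_init set up above boils down to the following; a condensed sketch using the cvl_0_* interface names and 10.0.0.x addresses from this machine (the target application is then launched via ip netns exec cvl_0_0_ns_spdk).
# one E810 port stays in the default namespace as the initiator (10.0.0.1),
# the other moves into a namespace and owns the target address (10.0.0.2)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# both directions are then verified with a single ping, as shown above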
00:28:02.151 01:35:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:02.151 01:35:53 -- common/autotest_common.sh@10 -- # set +x 00:28:02.151 [2024-07-27 01:35:53.806192] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:02.151 [2024-07-27 01:35:53.806262] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.151 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.151 [2024-07-27 01:35:53.869433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.409 [2024-07-27 01:35:53.973992] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:02.409 [2024-07-27 01:35:53.974172] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.409 [2024-07-27 01:35:53.974192] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.409 [2024-07-27 01:35:53.974205] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:02.409 [2024-07-27 01:35:53.974234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.377 01:35:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:03.377 01:35:54 -- common/autotest_common.sh@852 -- # return 0 00:28:03.377 01:35:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:03.377 01:35:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:03.377 01:35:54 -- common/autotest_common.sh@10 -- # set +x 00:28:03.377 01:35:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.377 01:35:54 -- target/dif.sh@139 -- # create_transport 00:28:03.377 01:35:54 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:03.377 01:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.377 01:35:54 -- common/autotest_common.sh@10 -- # set +x 00:28:03.377 [2024-07-27 01:35:54.784014] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.377 01:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.377 01:35:54 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:03.377 01:35:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:03.377 01:35:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:03.377 01:35:54 -- common/autotest_common.sh@10 -- # set +x 00:28:03.377 ************************************ 00:28:03.377 START TEST fio_dif_1_default 00:28:03.377 ************************************ 00:28:03.377 01:35:54 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:28:03.377 01:35:54 -- target/dif.sh@86 -- # create_subsystems 0 00:28:03.377 01:35:54 -- target/dif.sh@28 -- # local sub 00:28:03.377 01:35:54 -- target/dif.sh@30 -- # for sub in "$@" 00:28:03.377 01:35:54 -- target/dif.sh@31 -- # create_subsystem 0 00:28:03.377 01:35:54 -- target/dif.sh@18 -- # local sub_id=0 00:28:03.377 01:35:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:03.377 01:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.378 01:35:54 -- common/autotest_common.sh@10 -- # set +x 00:28:03.378 bdev_null0 00:28:03.378 01:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.378 01:35:54 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:03.378 01:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.378 01:35:54 -- common/autotest_common.sh@10 -- # set +x 00:28:03.378 01:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.378 01:35:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:03.378 01:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.378 01:35:54 -- common/autotest_common.sh@10 -- # set +x 00:28:03.378 01:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.378 01:35:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:03.378 01:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.378 01:35:54 -- common/autotest_common.sh@10 -- # set +x 00:28:03.378 [2024-07-27 01:35:54.820298] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.378 01:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.378 01:35:54 -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:03.378 01:35:54 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:03.378 01:35:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:03.378 01:35:54 -- nvmf/common.sh@520 -- # config=() 00:28:03.378 01:35:54 -- nvmf/common.sh@520 -- # local subsystem config 00:28:03.378 01:35:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:03.378 01:35:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:03.378 01:35:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:03.378 { 00:28:03.378 "params": { 00:28:03.378 "name": "Nvme$subsystem", 00:28:03.378 "trtype": "$TEST_TRANSPORT", 00:28:03.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.378 "adrfam": "ipv4", 00:28:03.378 "trsvcid": "$NVMF_PORT", 00:28:03.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.378 "hdgst": ${hdgst:-false}, 00:28:03.378 "ddgst": ${ddgst:-false} 00:28:03.378 }, 00:28:03.378 "method": "bdev_nvme_attach_controller" 00:28:03.378 } 00:28:03.378 EOF 00:28:03.378 )") 00:28:03.378 01:35:54 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:03.378 01:35:54 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:03.378 01:35:54 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:03.378 01:35:54 -- target/dif.sh@82 -- # gen_fio_conf 00:28:03.378 01:35:54 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:03.378 01:35:54 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:03.378 01:35:54 -- common/autotest_common.sh@1320 -- # shift 00:28:03.378 01:35:54 -- target/dif.sh@54 -- # local file 00:28:03.378 01:35:54 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:03.378 01:35:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:03.378 01:35:54 -- target/dif.sh@56 -- # cat 00:28:03.378 01:35:54 -- nvmf/common.sh@542 -- # cat 00:28:03.378 01:35:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:03.378 01:35:54 -- 
common/autotest_common.sh@1324 -- # grep libasan 00:28:03.378 01:35:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:03.378 01:35:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:03.378 01:35:54 -- target/dif.sh@72 -- # (( file <= files )) 00:28:03.378 01:35:54 -- nvmf/common.sh@544 -- # jq . 00:28:03.378 01:35:54 -- nvmf/common.sh@545 -- # IFS=, 00:28:03.378 01:35:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:03.378 "params": { 00:28:03.378 "name": "Nvme0", 00:28:03.378 "trtype": "tcp", 00:28:03.378 "traddr": "10.0.0.2", 00:28:03.378 "adrfam": "ipv4", 00:28:03.378 "trsvcid": "4420", 00:28:03.378 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:03.378 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:03.378 "hdgst": false, 00:28:03.378 "ddgst": false 00:28:03.378 }, 00:28:03.378 "method": "bdev_nvme_attach_controller" 00:28:03.378 }' 00:28:03.378 01:35:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:03.378 01:35:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:03.378 01:35:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:03.378 01:35:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:03.378 01:35:54 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:03.378 01:35:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:03.378 01:35:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:03.378 01:35:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:03.378 01:35:54 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:03.378 01:35:54 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:03.378 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:03.378 fio-3.35 00:28:03.378 Starting 1 thread 00:28:03.378 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.957 [2024-07-27 01:35:55.507849] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
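Stripped of the xtrace plumbing, the fio_dif_1_default pass whose results follow is driven roughly as below; a sketch only, with paths written relative to the SPDK tree (the /dev/fd descriptors carry the generated fio job and the bdev_nvme_attach_controller JSON printed above).
# a null bdev with 16-byte metadata and DIF type 1 backs the exported namespace
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# fio's spdk_bdev ioengine attaches to that namespace over NVMe/TCP
LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
# /dev/fd/62 ~ bdev_nvme_attach_controller: trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode0
# /dev/fd/61 ~ fio job "filename0": rw=randread, bs=4096, iodepth=4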
00:28:03.957 [2024-07-27 01:35:55.507939] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:13.920 00:28:13.920 filename0: (groupid=0, jobs=1): err= 0: pid=764348: Sat Jul 27 01:36:05 2024 00:28:13.920 read: IOPS=187, BW=749KiB/s (767kB/s)(7488KiB/10002msec) 00:28:13.920 slat (nsec): min=6584, max=78429, avg=8814.05, stdev=4074.76 00:28:13.920 clat (usec): min=857, max=46387, avg=21343.20, stdev=20290.21 00:28:13.920 lat (usec): min=864, max=46421, avg=21352.02, stdev=20290.37 00:28:13.920 clat percentiles (usec): 00:28:13.920 | 1.00th=[ 881], 5.00th=[ 922], 10.00th=[ 938], 20.00th=[ 963], 00:28:13.920 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[41157], 60.00th=[41157], 00:28:13.920 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:28:13.920 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:28:13.920 | 99.99th=[46400] 00:28:13.920 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=749.47, stdev=30.76, samples=19 00:28:13.920 iops : min= 168, max= 192, avg=187.37, stdev= 7.69, samples=19 00:28:13.920 lat (usec) : 1000=35.04% 00:28:13.920 lat (msec) : 2=14.74%, 50=50.21% 00:28:13.920 cpu : usr=90.23%, sys=9.48%, ctx=19, majf=0, minf=284 00:28:13.920 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.920 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.920 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:13.920 00:28:13.920 Run status group 0 (all jobs): 00:28:13.920 READ: bw=749KiB/s (767kB/s), 749KiB/s-749KiB/s (767kB/s-767kB/s), io=7488KiB (7668kB), run=10002-10002msec 00:28:14.483 01:36:05 -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:14.483 01:36:05 -- target/dif.sh@43 -- # local sub 00:28:14.483 01:36:05 -- target/dif.sh@45 -- # for sub in "$@" 00:28:14.483 01:36:05 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:14.483 01:36:05 -- target/dif.sh@36 -- # local sub_id=0 00:28:14.483 01:36:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:14.483 01:36:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.483 01:36:05 -- common/autotest_common.sh@10 -- # set +x 00:28:14.483 01:36:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.483 01:36:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:14.483 01:36:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.483 01:36:05 -- common/autotest_common.sh@10 -- # set +x 00:28:14.483 01:36:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.483 00:28:14.483 real 0m11.162s 00:28:14.483 user 0m10.222s 00:28:14.483 sys 0m1.213s 00:28:14.483 01:36:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.483 01:36:05 -- common/autotest_common.sh@10 -- # set +x 00:28:14.483 ************************************ 00:28:14.484 END TEST fio_dif_1_default 00:28:14.484 ************************************ 00:28:14.484 01:36:05 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:14.484 01:36:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:14.484 01:36:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:14.484 01:36:05 -- common/autotest_common.sh@10 -- # set +x 00:28:14.484 ************************************ 00:28:14.484 START TEST 
fio_dif_1_multi_subsystems 00:28:14.484 ************************************ 00:28:14.484 01:36:05 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:28:14.484 01:36:05 -- target/dif.sh@92 -- # local files=1 00:28:14.484 01:36:05 -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:14.484 01:36:05 -- target/dif.sh@28 -- # local sub 00:28:14.484 01:36:05 -- target/dif.sh@30 -- # for sub in "$@" 00:28:14.484 01:36:05 -- target/dif.sh@31 -- # create_subsystem 0 00:28:14.484 01:36:05 -- target/dif.sh@18 -- # local sub_id=0 00:28:14.484 01:36:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:14.484 01:36:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.484 01:36:05 -- common/autotest_common.sh@10 -- # set +x 00:28:14.484 bdev_null0 00:28:14.484 01:36:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.484 01:36:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:14.484 01:36:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.484 01:36:05 -- common/autotest_common.sh@10 -- # set +x 00:28:14.484 01:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.484 01:36:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:14.484 01:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.484 01:36:06 -- common/autotest_common.sh@10 -- # set +x 00:28:14.484 01:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.484 01:36:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:14.484 01:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.484 01:36:06 -- common/autotest_common.sh@10 -- # set +x 00:28:14.484 [2024-07-27 01:36:06.014466] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.484 01:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.484 01:36:06 -- target/dif.sh@30 -- # for sub in "$@" 00:28:14.484 01:36:06 -- target/dif.sh@31 -- # create_subsystem 1 00:28:14.484 01:36:06 -- target/dif.sh@18 -- # local sub_id=1 00:28:14.484 01:36:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:14.484 01:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.484 01:36:06 -- common/autotest_common.sh@10 -- # set +x 00:28:14.484 bdev_null1 00:28:14.484 01:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.484 01:36:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:14.484 01:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.484 01:36:06 -- common/autotest_common.sh@10 -- # set +x 00:28:14.484 01:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.484 01:36:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:14.484 01:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.484 01:36:06 -- common/autotest_common.sh@10 -- # set +x 00:28:14.484 01:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.484 01:36:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:14.484 01:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.484 01:36:06 -- 
common/autotest_common.sh@10 -- # set +x 00:28:14.484 01:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.484 01:36:06 -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:14.484 01:36:06 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:14.484 01:36:06 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:14.484 01:36:06 -- nvmf/common.sh@520 -- # config=() 00:28:14.484 01:36:06 -- nvmf/common.sh@520 -- # local subsystem config 00:28:14.484 01:36:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:14.484 01:36:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:14.484 { 00:28:14.484 "params": { 00:28:14.484 "name": "Nvme$subsystem", 00:28:14.484 "trtype": "$TEST_TRANSPORT", 00:28:14.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.484 "adrfam": "ipv4", 00:28:14.484 "trsvcid": "$NVMF_PORT", 00:28:14.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.484 "hdgst": ${hdgst:-false}, 00:28:14.484 "ddgst": ${ddgst:-false} 00:28:14.484 }, 00:28:14.484 "method": "bdev_nvme_attach_controller" 00:28:14.484 } 00:28:14.484 EOF 00:28:14.484 )") 00:28:14.484 01:36:06 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.484 01:36:06 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.484 01:36:06 -- target/dif.sh@82 -- # gen_fio_conf 00:28:14.484 01:36:06 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:14.484 01:36:06 -- target/dif.sh@54 -- # local file 00:28:14.484 01:36:06 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:14.484 01:36:06 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:14.484 01:36:06 -- target/dif.sh@56 -- # cat 00:28:14.484 01:36:06 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:14.484 01:36:06 -- common/autotest_common.sh@1320 -- # shift 00:28:14.484 01:36:06 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:14.484 01:36:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:14.484 01:36:06 -- nvmf/common.sh@542 -- # cat 00:28:14.484 01:36:06 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:14.484 01:36:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:14.484 01:36:06 -- target/dif.sh@72 -- # (( file <= files )) 00:28:14.484 01:36:06 -- target/dif.sh@73 -- # cat 00:28:14.484 01:36:06 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:14.484 01:36:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:14.484 01:36:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:14.484 01:36:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:14.484 { 00:28:14.484 "params": { 00:28:14.484 "name": "Nvme$subsystem", 00:28:14.484 "trtype": "$TEST_TRANSPORT", 00:28:14.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.484 "adrfam": "ipv4", 00:28:14.484 "trsvcid": "$NVMF_PORT", 00:28:14.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.484 "hdgst": ${hdgst:-false}, 00:28:14.484 "ddgst": ${ddgst:-false} 00:28:14.484 }, 00:28:14.484 "method": "bdev_nvme_attach_controller" 00:28:14.484 } 00:28:14.484 EOF 00:28:14.484 )") 00:28:14.484 01:36:06 -- 
target/dif.sh@72 -- # (( file++ )) 00:28:14.484 01:36:06 -- nvmf/common.sh@542 -- # cat 00:28:14.484 01:36:06 -- target/dif.sh@72 -- # (( file <= files )) 00:28:14.484 01:36:06 -- nvmf/common.sh@544 -- # jq . 00:28:14.484 01:36:06 -- nvmf/common.sh@545 -- # IFS=, 00:28:14.484 01:36:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:14.484 "params": { 00:28:14.484 "name": "Nvme0", 00:28:14.484 "trtype": "tcp", 00:28:14.484 "traddr": "10.0.0.2", 00:28:14.484 "adrfam": "ipv4", 00:28:14.484 "trsvcid": "4420", 00:28:14.484 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.484 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:14.484 "hdgst": false, 00:28:14.484 "ddgst": false 00:28:14.484 }, 00:28:14.484 "method": "bdev_nvme_attach_controller" 00:28:14.484 },{ 00:28:14.484 "params": { 00:28:14.484 "name": "Nvme1", 00:28:14.484 "trtype": "tcp", 00:28:14.484 "traddr": "10.0.0.2", 00:28:14.484 "adrfam": "ipv4", 00:28:14.484 "trsvcid": "4420", 00:28:14.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:14.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:14.484 "hdgst": false, 00:28:14.484 "ddgst": false 00:28:14.484 }, 00:28:14.484 "method": "bdev_nvme_attach_controller" 00:28:14.484 }' 00:28:14.484 01:36:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:14.484 01:36:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:14.484 01:36:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:14.484 01:36:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:14.484 01:36:06 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:14.484 01:36:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:14.484 01:36:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:14.484 01:36:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:14.484 01:36:06 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:14.484 01:36:06 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.742 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:14.742 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:14.742 fio-3.35 00:28:14.742 Starting 2 threads 00:28:14.742 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.308 [2024-07-27 01:36:06.856616] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
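The create_subsystems 0 1 calls traced before this fio launch (bdev_null_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) go through the harness's rpc_cmd wrapper, which ultimately issues scripts/rpc.py calls against the running nvmf target. A hand-driven sketch of that setup, assuming the TCP transport was already created earlier in dif.sh and reusing this job's 10.0.0.2 listener:

# Sketch: manual equivalent of create_subsystems 0 1 (64 MB null bdevs, 512 B blocks,
# 16 B metadata, DIF type 1), mirroring the arguments shown in the trace above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for sub in 0 1; do
  $RPC bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
  $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
    --serial-number "53313233-$sub" --allow-any-host
  $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
  $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" -t tcp -a 10.0.0.2 -s 4420
done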
00:28:15.308 [2024-07-27 01:36:06.856708] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:25.269 00:28:25.269 filename0: (groupid=0, jobs=1): err= 0: pid=765907: Sat Jul 27 01:36:16 2024 00:28:25.269 read: IOPS=188, BW=753KiB/s (771kB/s)(7536KiB/10005msec) 00:28:25.269 slat (nsec): min=6895, max=64436, avg=8740.75, stdev=3162.41 00:28:25.269 clat (usec): min=849, max=42412, avg=21213.86, stdev=20248.97 00:28:25.269 lat (usec): min=856, max=42428, avg=21222.60, stdev=20248.62 00:28:25.269 clat percentiles (usec): 00:28:25.269 | 1.00th=[ 873], 5.00th=[ 889], 10.00th=[ 898], 20.00th=[ 914], 00:28:25.269 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[40633], 60.00th=[41157], 00:28:25.269 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:28:25.269 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:28:25.269 | 99.99th=[42206] 00:28:25.269 bw ( KiB/s): min= 704, max= 768, per=65.95%, avg=752.00, stdev=26.47, samples=20 00:28:25.269 iops : min= 176, max= 192, avg=188.00, stdev= 6.62, samples=20 00:28:25.269 lat (usec) : 1000=48.20% 00:28:25.269 lat (msec) : 2=1.70%, 50=50.11% 00:28:25.269 cpu : usr=94.72%, sys=4.98%, ctx=15, majf=0, minf=241 00:28:25.269 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:25.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.269 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.269 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:25.269 filename1: (groupid=0, jobs=1): err= 0: pid=765908: Sat Jul 27 01:36:16 2024 00:28:25.269 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10001msec) 00:28:25.269 slat (nsec): min=6855, max=29549, avg=8807.90, stdev=2902.03 00:28:25.269 clat (usec): min=40891, max=42992, avg=41296.95, stdev=477.86 00:28:25.269 lat (usec): min=40898, max=43005, avg=41305.76, stdev=478.20 00:28:25.269 clat percentiles (usec): 00:28:25.269 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:25.269 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:25.269 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:28:25.269 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:28:25.269 | 99.99th=[43254] 00:28:25.269 bw ( KiB/s): min= 384, max= 416, per=33.77%, avg=385.68, stdev= 7.34, samples=19 00:28:25.269 iops : min= 96, max= 104, avg=96.42, stdev= 1.84, samples=19 00:28:25.269 lat (msec) : 50=100.00% 00:28:25.269 cpu : usr=94.48%, sys=5.23%, ctx=13, majf=0, minf=62 00:28:25.269 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:25.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.269 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.269 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:25.269 00:28:25.269 Run status group 0 (all jobs): 00:28:25.269 READ: bw=1140KiB/s (1168kB/s), 387KiB/s-753KiB/s (396kB/s-771kB/s), io=11.1MiB (11.7MB), run=10001-10005msec 00:28:25.528 01:36:17 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:25.528 01:36:17 -- target/dif.sh@43 -- # local sub 00:28:25.528 01:36:17 -- target/dif.sh@45 -- # for sub in "$@" 00:28:25.528 01:36:17 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:25.528 
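The destroy_subsystems pass the trace is entering here is the mirror image of that setup: remove each NVMe-oF subsystem, then delete its backing null bdev. As a sketch, with the same rpc.py assumption as above:

# Sketch: manual equivalent of destroy_subsystems 0 1.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for sub in 0 1; do
  $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
  $RPC bdev_null_delete "bdev_null$sub"
done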
01:36:17 -- target/dif.sh@36 -- # local sub_id=0 00:28:25.528 01:36:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:25.528 01:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.528 01:36:17 -- common/autotest_common.sh@10 -- # set +x 00:28:25.528 01:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.528 01:36:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:25.528 01:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.528 01:36:17 -- common/autotest_common.sh@10 -- # set +x 00:28:25.528 01:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.528 01:36:17 -- target/dif.sh@45 -- # for sub in "$@" 00:28:25.528 01:36:17 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:25.528 01:36:17 -- target/dif.sh@36 -- # local sub_id=1 00:28:25.528 01:36:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:25.528 01:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.528 01:36:17 -- common/autotest_common.sh@10 -- # set +x 00:28:25.528 01:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.528 01:36:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:25.528 01:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.528 01:36:17 -- common/autotest_common.sh@10 -- # set +x 00:28:25.528 01:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.528 00:28:25.528 real 0m11.240s 00:28:25.528 user 0m20.102s 00:28:25.528 sys 0m1.311s 00:28:25.528 01:36:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:25.528 01:36:17 -- common/autotest_common.sh@10 -- # set +x 00:28:25.528 ************************************ 00:28:25.528 END TEST fio_dif_1_multi_subsystems 00:28:25.528 ************************************ 00:28:25.528 01:36:17 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:25.528 01:36:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:25.528 01:36:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:25.528 01:36:17 -- common/autotest_common.sh@10 -- # set +x 00:28:25.528 ************************************ 00:28:25.528 START TEST fio_dif_rand_params 00:28:25.528 ************************************ 00:28:25.528 01:36:17 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:28:25.528 01:36:17 -- target/dif.sh@100 -- # local NULL_DIF 00:28:25.528 01:36:17 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:25.528 01:36:17 -- target/dif.sh@103 -- # NULL_DIF=3 00:28:25.528 01:36:17 -- target/dif.sh@103 -- # bs=128k 00:28:25.528 01:36:17 -- target/dif.sh@103 -- # numjobs=3 00:28:25.528 01:36:17 -- target/dif.sh@103 -- # iodepth=3 00:28:25.528 01:36:17 -- target/dif.sh@103 -- # runtime=5 00:28:25.528 01:36:17 -- target/dif.sh@105 -- # create_subsystems 0 00:28:25.528 01:36:17 -- target/dif.sh@28 -- # local sub 00:28:25.528 01:36:17 -- target/dif.sh@30 -- # for sub in "$@" 00:28:25.528 01:36:17 -- target/dif.sh@31 -- # create_subsystem 0 00:28:25.528 01:36:17 -- target/dif.sh@18 -- # local sub_id=0 00:28:25.528 01:36:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:25.528 01:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.528 01:36:17 -- common/autotest_common.sh@10 -- # set +x 00:28:25.528 bdev_null0 00:28:25.528 01:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.528 01:36:17 -- target/dif.sh@22 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:25.528 01:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.528 01:36:17 -- common/autotest_common.sh@10 -- # set +x 00:28:25.528 01:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.528 01:36:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:25.528 01:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.528 01:36:17 -- common/autotest_common.sh@10 -- # set +x 00:28:25.528 01:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.528 01:36:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:25.528 01:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.528 01:36:17 -- common/autotest_common.sh@10 -- # set +x 00:28:25.528 [2024-07-27 01:36:17.275995] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.528 01:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.528 01:36:17 -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:25.528 01:36:17 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:25.528 01:36:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:25.528 01:36:17 -- nvmf/common.sh@520 -- # config=() 00:28:25.528 01:36:17 -- nvmf/common.sh@520 -- # local subsystem config 00:28:25.528 01:36:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:25.528 01:36:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:25.528 01:36:17 -- target/dif.sh@82 -- # gen_fio_conf 00:28:25.528 01:36:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:25.528 { 00:28:25.528 "params": { 00:28:25.528 "name": "Nvme$subsystem", 00:28:25.528 "trtype": "$TEST_TRANSPORT", 00:28:25.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.528 "adrfam": "ipv4", 00:28:25.528 "trsvcid": "$NVMF_PORT", 00:28:25.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.528 "hdgst": ${hdgst:-false}, 00:28:25.528 "ddgst": ${ddgst:-false} 00:28:25.528 }, 00:28:25.528 "method": "bdev_nvme_attach_controller" 00:28:25.528 } 00:28:25.528 EOF 00:28:25.528 )") 00:28:25.528 01:36:17 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:25.528 01:36:17 -- target/dif.sh@54 -- # local file 00:28:25.528 01:36:17 -- target/dif.sh@56 -- # cat 00:28:25.528 01:36:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:25.528 01:36:17 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:25.528 01:36:17 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:25.528 01:36:17 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.528 01:36:17 -- common/autotest_common.sh@1320 -- # shift 00:28:25.528 01:36:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:25.528 01:36:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:25.528 01:36:17 -- nvmf/common.sh@542 -- # cat 00:28:25.528 01:36:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:25.528 01:36:17 -- target/dif.sh@72 -- # (( file <= files )) 00:28:25.528 01:36:17 -- common/autotest_common.sh@1324 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.528 01:36:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:25.528 01:36:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:25.788 01:36:17 -- nvmf/common.sh@544 -- # jq . 00:28:25.788 01:36:17 -- nvmf/common.sh@545 -- # IFS=, 00:28:25.788 01:36:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:25.788 "params": { 00:28:25.788 "name": "Nvme0", 00:28:25.788 "trtype": "tcp", 00:28:25.788 "traddr": "10.0.0.2", 00:28:25.788 "adrfam": "ipv4", 00:28:25.788 "trsvcid": "4420", 00:28:25.788 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:25.788 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:25.788 "hdgst": false, 00:28:25.788 "ddgst": false 00:28:25.788 }, 00:28:25.788 "method": "bdev_nvme_attach_controller" 00:28:25.788 }' 00:28:25.788 01:36:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:25.788 01:36:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:25.788 01:36:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:25.788 01:36:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.788 01:36:17 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:25.788 01:36:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:25.788 01:36:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:25.788 01:36:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:25.788 01:36:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:25.788 01:36:17 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:25.788 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:25.788 ... 00:28:25.788 fio-3.35 00:28:25.788 Starting 3 threads 00:28:26.047 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.615 [2024-07-27 01:36:18.081871] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
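The ldd / grep libasan / awk '{print $3}' lines traced around here are the harness probing whether the fio plugin was linked against an address sanitizer runtime; if it were, that library would have to be preloaded ahead of the plugin. In this run both probes come back empty (asan_lib=), so LD_PRELOAD ends up holding only the plugin path. A condensed sketch of that probe:

# Sketch of the sanitizer probe: collect any ASan runtime the plugin links, then
# preload it before the spdk_bdev plugin. Here both greps match nothing, which is
# why the traced LD_PRELOAD value starts with a lone space before the plugin path.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
ld_preload=""
for sanitizer in libasan libclang_rt.asan; do
  asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  [[ -n "$asan_lib" ]] && ld_preload+=" $asan_lib"
done
# The harness then launches fio with LD_PRELOAD="$ld_preload $plugin" and the JSON
# config / job file handed over on /dev/fd/62 and /dev/fd/61, as the trace shows.
echo "LD_PRELOAD would be:$ld_preload $plugin"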
00:28:26.615 [2024-07-27 01:36:18.081941] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:31.878 00:28:31.878 filename0: (groupid=0, jobs=1): err= 0: pid=767849: Sat Jul 27 01:36:23 2024 00:28:31.878 read: IOPS=211, BW=26.5MiB/s (27.8MB/s)(134MiB/5045msec) 00:28:31.878 slat (nsec): min=5869, max=42824, avg=11298.85, stdev=2787.47 00:28:31.879 clat (usec): min=5306, max=55906, avg=14098.58, stdev=13155.43 00:28:31.879 lat (usec): min=5318, max=55918, avg=14109.88, stdev=13155.35 00:28:31.879 clat percentiles (usec): 00:28:31.879 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7898], 00:28:31.879 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[10159], 00:28:31.879 | 70.00th=[11207], 80.00th=[12518], 90.00th=[49021], 95.00th=[51119], 00:28:31.879 | 99.00th=[53216], 99.50th=[53740], 99.90th=[55837], 99.95th=[55837], 00:28:31.879 | 99.99th=[55837] 00:28:31.879 bw ( KiB/s): min=21760, max=36096, per=36.33%, avg=27315.20, stdev=4740.50, samples=10 00:28:31.879 iops : min= 170, max= 282, avg=213.40, stdev=37.04, samples=10 00:28:31.879 lat (msec) : 10=55.29%, 20=33.58%, 50=3.74%, 100=7.39% 00:28:31.879 cpu : usr=91.49%, sys=7.79%, ctx=13, majf=0, minf=114 00:28:31.879 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:31.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.879 issued rwts: total=1069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.879 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:31.879 filename0: (groupid=0, jobs=1): err= 0: pid=767850: Sat Jul 27 01:36:23 2024 00:28:31.879 read: IOPS=194, BW=24.4MiB/s (25.5MB/s)(122MiB/5004msec) 00:28:31.879 slat (nsec): min=5874, max=72521, avg=11737.12, stdev=2901.93 00:28:31.879 clat (usec): min=5573, max=89574, avg=15377.68, stdev=14704.99 00:28:31.879 lat (usec): min=5584, max=89586, avg=15389.41, stdev=14704.86 00:28:31.879 clat percentiles (usec): 00:28:31.879 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 8094], 00:28:31.879 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10421], 00:28:31.879 | 70.00th=[12125], 80.00th=[13435], 90.00th=[50070], 95.00th=[52691], 00:28:31.879 | 99.00th=[55313], 99.50th=[56886], 99.90th=[89654], 99.95th=[89654], 00:28:31.879 | 99.99th=[89654] 00:28:31.879 bw ( KiB/s): min=14818, max=32256, per=32.72%, avg=24601.11, stdev=6011.80, samples=9 00:28:31.879 iops : min= 115, max= 252, avg=192.11, stdev=47.12, samples=9 00:28:31.879 lat (msec) : 10=56.00%, 20=30.56%, 50=2.87%, 100=10.56% 00:28:31.879 cpu : usr=92.64%, sys=6.72%, ctx=12, majf=0, minf=193 00:28:31.879 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:31.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.879 issued rwts: total=975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.879 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:31.879 filename0: (groupid=0, jobs=1): err= 0: pid=767851: Sat Jul 27 01:36:23 2024 00:28:31.879 read: IOPS=182, BW=22.8MiB/s (23.9MB/s)(115MiB/5043msec) 00:28:31.879 slat (nsec): min=5700, max=35661, avg=11190.53, stdev=2349.33 00:28:31.879 clat (usec): min=5470, max=92424, avg=16400.98, stdev=15434.22 00:28:31.879 lat (usec): min=5482, max=92435, avg=16412.17, stdev=15434.22 00:28:31.879 clat percentiles 
(usec): 00:28:31.879 | 1.00th=[ 6128], 5.00th=[ 6915], 10.00th=[ 7504], 20.00th=[ 8848], 00:28:31.879 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10290], 60.00th=[11338], 00:28:31.879 | 70.00th=[12649], 80.00th=[14222], 90.00th=[50594], 95.00th=[52167], 00:28:31.879 | 99.00th=[56361], 99.50th=[90702], 99.90th=[92799], 99.95th=[92799], 00:28:31.879 | 99.99th=[92799] 00:28:31.879 bw ( KiB/s): min=18176, max=34816, per=31.23%, avg=23480.50, stdev=5131.11, samples=10 00:28:31.879 iops : min= 142, max= 272, avg=183.40, stdev=40.06, samples=10 00:28:31.879 lat (msec) : 10=44.07%, 20=41.89%, 50=2.07%, 100=11.97% 00:28:31.879 cpu : usr=92.11%, sys=7.40%, ctx=9, majf=0, minf=80 00:28:31.879 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:31.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.879 issued rwts: total=919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.879 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:31.879 00:28:31.879 Run status group 0 (all jobs): 00:28:31.879 READ: bw=73.4MiB/s (77.0MB/s), 22.8MiB/s-26.5MiB/s (23.9MB/s-27.8MB/s), io=370MiB (388MB), run=5004-5045msec 00:28:31.879 01:36:23 -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:31.879 01:36:23 -- target/dif.sh@43 -- # local sub 00:28:31.879 01:36:23 -- target/dif.sh@45 -- # for sub in "$@" 00:28:31.879 01:36:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:31.879 01:36:23 -- target/dif.sh@36 -- # local sub_id=0 00:28:31.879 01:36:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:31.879 01:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.879 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:28:31.879 01:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.879 01:36:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:31.879 01:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.879 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:28:31.879 01:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.879 01:36:23 -- target/dif.sh@109 -- # NULL_DIF=2 00:28:31.879 01:36:23 -- target/dif.sh@109 -- # bs=4k 00:28:31.879 01:36:23 -- target/dif.sh@109 -- # numjobs=8 00:28:31.879 01:36:23 -- target/dif.sh@109 -- # iodepth=16 00:28:31.879 01:36:23 -- target/dif.sh@109 -- # runtime= 00:28:31.879 01:36:23 -- target/dif.sh@109 -- # files=2 00:28:31.879 01:36:23 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:31.879 01:36:23 -- target/dif.sh@28 -- # local sub 00:28:31.879 01:36:23 -- target/dif.sh@30 -- # for sub in "$@" 00:28:31.879 01:36:23 -- target/dif.sh@31 -- # create_subsystem 0 00:28:31.879 01:36:23 -- target/dif.sh@18 -- # local sub_id=0 00:28:31.879 01:36:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:31.879 01:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.879 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:28:31.879 bdev_null0 00:28:31.879 01:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.879 01:36:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:31.879 01:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.879 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:28:31.879 01:36:23 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:28:31.879 01:36:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:31.879 01:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.879 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:28:31.879 01:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.879 01:36:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:31.879 01:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.879 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:28:31.879 [2024-07-27 01:36:23.503940] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.879 01:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.879 01:36:23 -- target/dif.sh@30 -- # for sub in "$@" 00:28:31.879 01:36:23 -- target/dif.sh@31 -- # create_subsystem 1 00:28:31.879 01:36:23 -- target/dif.sh@18 -- # local sub_id=1 00:28:31.879 01:36:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:31.879 01:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.879 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:28:31.879 bdev_null1 00:28:31.879 01:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.879 01:36:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:31.879 01:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.879 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:28:31.879 01:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.879 01:36:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:31.879 01:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.879 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:28:31.879 01:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.879 01:36:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:31.879 01:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.879 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:28:31.879 01:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.879 01:36:23 -- target/dif.sh@30 -- # for sub in "$@" 00:28:31.879 01:36:23 -- target/dif.sh@31 -- # create_subsystem 2 00:28:31.879 01:36:23 -- target/dif.sh@18 -- # local sub_id=2 00:28:31.879 01:36:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:31.879 01:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.879 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:28:31.879 bdev_null2 00:28:31.879 01:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.879 01:36:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:31.879 01:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.879 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:28:31.879 01:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.879 01:36:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:31.879 01:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.879 01:36:23 -- 
common/autotest_common.sh@10 -- # set +x 00:28:31.879 01:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.879 01:36:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:31.879 01:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.879 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:28:31.879 01:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.879 01:36:23 -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:31.879 01:36:23 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:31.879 01:36:23 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:31.879 01:36:23 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.879 01:36:23 -- nvmf/common.sh@520 -- # config=() 00:28:31.879 01:36:23 -- target/dif.sh@82 -- # gen_fio_conf 00:28:31.880 01:36:23 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.880 01:36:23 -- nvmf/common.sh@520 -- # local subsystem config 00:28:31.880 01:36:23 -- target/dif.sh@54 -- # local file 00:28:31.880 01:36:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:31.880 01:36:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:31.880 01:36:23 -- target/dif.sh@56 -- # cat 00:28:31.880 01:36:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:31.880 { 00:28:31.880 "params": { 00:28:31.880 "name": "Nvme$subsystem", 00:28:31.880 "trtype": "$TEST_TRANSPORT", 00:28:31.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.880 "adrfam": "ipv4", 00:28:31.880 "trsvcid": "$NVMF_PORT", 00:28:31.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.880 "hdgst": ${hdgst:-false}, 00:28:31.880 "ddgst": ${ddgst:-false} 00:28:31.880 }, 00:28:31.880 "method": "bdev_nvme_attach_controller" 00:28:31.880 } 00:28:31.880 EOF 00:28:31.880 )") 00:28:31.880 01:36:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:31.880 01:36:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:31.880 01:36:23 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:31.880 01:36:23 -- common/autotest_common.sh@1320 -- # shift 00:28:31.880 01:36:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:31.880 01:36:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.880 01:36:23 -- nvmf/common.sh@542 -- # cat 00:28:31.880 01:36:23 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:31.880 01:36:23 -- target/dif.sh@72 -- # (( file <= files )) 00:28:31.880 01:36:23 -- target/dif.sh@73 -- # cat 00:28:31.880 01:36:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:31.880 01:36:23 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:31.880 01:36:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:31.880 01:36:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:31.880 01:36:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:31.880 { 00:28:31.880 "params": { 00:28:31.880 "name": "Nvme$subsystem", 00:28:31.880 "trtype": "$TEST_TRANSPORT", 00:28:31.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.880 "adrfam": "ipv4", 00:28:31.880 "trsvcid": 
"$NVMF_PORT", 00:28:31.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.880 "hdgst": ${hdgst:-false}, 00:28:31.880 "ddgst": ${ddgst:-false} 00:28:31.880 }, 00:28:31.880 "method": "bdev_nvme_attach_controller" 00:28:31.880 } 00:28:31.880 EOF 00:28:31.880 )") 00:28:31.880 01:36:23 -- target/dif.sh@72 -- # (( file++ )) 00:28:31.880 01:36:23 -- target/dif.sh@72 -- # (( file <= files )) 00:28:31.880 01:36:23 -- target/dif.sh@73 -- # cat 00:28:31.880 01:36:23 -- nvmf/common.sh@542 -- # cat 00:28:31.880 01:36:23 -- target/dif.sh@72 -- # (( file++ )) 00:28:31.880 01:36:23 -- target/dif.sh@72 -- # (( file <= files )) 00:28:31.880 01:36:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:31.880 01:36:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:31.880 { 00:28:31.880 "params": { 00:28:31.880 "name": "Nvme$subsystem", 00:28:31.880 "trtype": "$TEST_TRANSPORT", 00:28:31.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.880 "adrfam": "ipv4", 00:28:31.880 "trsvcid": "$NVMF_PORT", 00:28:31.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.880 "hdgst": ${hdgst:-false}, 00:28:31.880 "ddgst": ${ddgst:-false} 00:28:31.880 }, 00:28:31.880 "method": "bdev_nvme_attach_controller" 00:28:31.880 } 00:28:31.880 EOF 00:28:31.880 )") 00:28:31.880 01:36:23 -- nvmf/common.sh@542 -- # cat 00:28:31.880 01:36:23 -- nvmf/common.sh@544 -- # jq . 00:28:31.880 01:36:23 -- nvmf/common.sh@545 -- # IFS=, 00:28:31.880 01:36:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:31.880 "params": { 00:28:31.880 "name": "Nvme0", 00:28:31.880 "trtype": "tcp", 00:28:31.880 "traddr": "10.0.0.2", 00:28:31.880 "adrfam": "ipv4", 00:28:31.880 "trsvcid": "4420", 00:28:31.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:31.880 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:31.880 "hdgst": false, 00:28:31.880 "ddgst": false 00:28:31.880 }, 00:28:31.880 "method": "bdev_nvme_attach_controller" 00:28:31.880 },{ 00:28:31.880 "params": { 00:28:31.880 "name": "Nvme1", 00:28:31.880 "trtype": "tcp", 00:28:31.880 "traddr": "10.0.0.2", 00:28:31.880 "adrfam": "ipv4", 00:28:31.880 "trsvcid": "4420", 00:28:31.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:31.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:31.880 "hdgst": false, 00:28:31.880 "ddgst": false 00:28:31.880 }, 00:28:31.880 "method": "bdev_nvme_attach_controller" 00:28:31.880 },{ 00:28:31.880 "params": { 00:28:31.880 "name": "Nvme2", 00:28:31.880 "trtype": "tcp", 00:28:31.880 "traddr": "10.0.0.2", 00:28:31.880 "adrfam": "ipv4", 00:28:31.880 "trsvcid": "4420", 00:28:31.880 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:31.880 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:31.880 "hdgst": false, 00:28:31.880 "ddgst": false 00:28:31.880 }, 00:28:31.880 "method": "bdev_nvme_attach_controller" 00:28:31.880 }' 00:28:31.880 01:36:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:31.880 01:36:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:31.880 01:36:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.880 01:36:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:31.880 01:36:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:31.880 01:36:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:31.880 01:36:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:31.880 01:36:23 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:31.880 01:36:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:31.880 01:36:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:32.138 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:32.138 ... 00:28:32.138 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:32.138 ... 00:28:32.138 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:32.138 ... 00:28:32.138 fio-3.35 00:28:32.138 Starting 24 threads 00:28:32.138 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.706 [2024-07-27 01:36:24.462871] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:28:32.706 [2024-07-27 01:36:24.462951] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:44.927 00:28:44.927 filename0: (groupid=0, jobs=1): err= 0: pid=768729: Sat Jul 27 01:36:34 2024 00:28:44.927 read: IOPS=505, BW=2023KiB/s (2072kB/s)(19.8MiB/10028msec) 00:28:44.927 slat (usec): min=6, max=171, avg=59.98, stdev=31.00 00:28:44.927 clat (usec): min=2308, max=59955, avg=31103.50, stdev=3419.83 00:28:44.927 lat (usec): min=2320, max=59988, avg=31163.48, stdev=3422.34 00:28:44.927 clat percentiles (usec): 00:28:44.928 | 1.00th=[ 8979], 5.00th=[29754], 10.00th=[30278], 20.00th=[30802], 00:28:44.928 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:28:44.928 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32375], 95.00th=[32900], 00:28:44.928 | 99.00th=[39060], 99.50th=[47449], 99.90th=[55837], 99.95th=[58983], 00:28:44.928 | 99.99th=[60031] 00:28:44.928 bw ( KiB/s): min= 1904, max= 2176, per=4.21%, avg=2022.40, stdev=68.95, samples=20 00:28:44.928 iops : min= 476, max= 544, avg=505.60, stdev=17.24, samples=20 00:28:44.928 lat (msec) : 4=0.12%, 10=1.01%, 20=0.41%, 50=98.23%, 100=0.24% 00:28:44.928 cpu : usr=98.46%, sys=1.09%, ctx=13, majf=0, minf=45 00:28:44.928 IO depths : 1=4.8%, 2=11.0%, 4=24.7%, 8=51.7%, 16=7.7%, 32=0.0%, >=64=0.0% 00:28:44.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.928 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.928 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.928 filename0: (groupid=0, jobs=1): err= 0: pid=768730: Sat Jul 27 01:36:34 2024 00:28:44.928 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10024msec) 00:28:44.928 slat (usec): min=7, max=115, avg=36.94, stdev=15.00 00:28:44.928 clat (usec): min=18388, max=52647, avg=31602.15, stdev=1805.75 00:28:44.928 lat (usec): min=18403, max=52672, avg=31639.09, stdev=1805.32 00:28:44.928 clat percentiles (usec): 00:28:44.928 | 1.00th=[26870], 5.00th=[30278], 10.00th=[30540], 20.00th=[31065], 00:28:44.928 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:28:44.928 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[33817], 00:28:44.928 | 99.00th=[36963], 99.50th=[39584], 99.90th=[51119], 99.95th=[51643], 00:28:44.928 | 99.99th=[52691] 00:28:44.928 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=2003.20, stdev=72.60, samples=20 
00:28:44.928 iops : min= 448, max= 512, avg=500.80, stdev=18.15, samples=20 00:28:44.928 lat (msec) : 20=0.04%, 50=99.64%, 100=0.32% 00:28:44.928 cpu : usr=98.14%, sys=1.29%, ctx=99, majf=0, minf=49 00:28:44.928 IO depths : 1=4.4%, 2=10.3%, 4=23.6%, 8=53.6%, 16=8.1%, 32=0.0%, >=64=0.0% 00:28:44.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.928 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.928 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.928 filename0: (groupid=0, jobs=1): err= 0: pid=768731: Sat Jul 27 01:36:34 2024 00:28:44.928 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10024msec) 00:28:44.928 slat (usec): min=7, max=113, avg=34.77, stdev=11.84 00:28:44.928 clat (usec): min=17880, max=55857, avg=31642.74, stdev=1797.56 00:28:44.928 lat (usec): min=17890, max=55890, avg=31677.52, stdev=1796.77 00:28:44.928 clat percentiles (usec): 00:28:44.928 | 1.00th=[28181], 5.00th=[30278], 10.00th=[30540], 20.00th=[31065], 00:28:44.928 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:28:44.928 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[33817], 00:28:44.928 | 99.00th=[36439], 99.50th=[38536], 99.90th=[51643], 99.95th=[52167], 00:28:44.928 | 99.99th=[55837] 00:28:44.928 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=2003.20, stdev=75.15, samples=20 00:28:44.928 iops : min= 448, max= 512, avg=500.80, stdev=18.79, samples=20 00:28:44.928 lat (msec) : 20=0.12%, 50=99.48%, 100=0.40% 00:28:44.928 cpu : usr=95.53%, sys=2.30%, ctx=270, majf=0, minf=75 00:28:44.928 IO depths : 1=4.1%, 2=10.3%, 4=24.9%, 8=52.3%, 16=8.4%, 32=0.0%, >=64=0.0% 00:28:44.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.928 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.928 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.928 filename0: (groupid=0, jobs=1): err= 0: pid=768732: Sat Jul 27 01:36:34 2024 00:28:44.928 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10024msec) 00:28:44.928 slat (usec): min=7, max=724, avg=44.19, stdev=20.49 00:28:44.928 clat (usec): min=25319, max=51477, avg=31526.99, stdev=1427.43 00:28:44.928 lat (usec): min=25347, max=51509, avg=31571.18, stdev=1428.10 00:28:44.928 clat percentiles (usec): 00:28:44.928 | 1.00th=[29754], 5.00th=[30278], 10.00th=[30802], 20.00th=[31065], 00:28:44.928 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:28:44.928 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32375], 95.00th=[32900], 00:28:44.928 | 99.00th=[34866], 99.50th=[36963], 99.90th=[51119], 99.95th=[51643], 00:28:44.928 | 99.99th=[51643] 00:28:44.928 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=2003.20, stdev=75.15, samples=20 00:28:44.928 iops : min= 448, max= 512, avg=500.80, stdev=18.79, samples=20 00:28:44.928 lat (msec) : 50=99.68%, 100=0.32% 00:28:44.928 cpu : usr=93.92%, sys=3.18%, ctx=286, majf=0, minf=53 00:28:44.928 IO depths : 1=5.7%, 2=11.9%, 4=24.8%, 8=50.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:28:44.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.928 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.928 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.928 latency : target=0, window=0, percentile=100.00%, depth=16 
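A quick way to sanity-check the per-file summaries in this group: multiply the reported IOPS by the 4 KiB block size to get bytes per second, which fio prints in KiB/s and, in parentheses, kB/s. For the ~501-IOPS readers above, assuming exact 4096 B reads:

# Cross-check of "read: IOPS=501, BW=2005KiB/s (2053kB/s)".
awk 'BEGIN { b = 501 * 4096; printf "%.0f KiB/s (%.0f kB/s)\n", b / 1024, b / 1000 }'
# -> 2004 KiB/s (2052 kB/s); the small offset is only because the displayed IOPS is rounded.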
00:28:44.928 filename0: (groupid=0, jobs=1): err= 0: pid=768733: Sat Jul 27 01:36:34 2024 00:28:44.928 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10001msec) 00:28:44.928 slat (usec): min=7, max=884, avg=36.54, stdev=24.30 00:28:44.928 clat (usec): min=5694, max=63870, avg=31676.27, stdev=4023.96 00:28:44.928 lat (usec): min=5708, max=63910, avg=31712.81, stdev=4023.77 00:28:44.928 clat percentiles (usec): 00:28:44.928 | 1.00th=[15795], 5.00th=[30016], 10.00th=[30540], 20.00th=[31065], 00:28:44.928 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:28:44.928 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[33817], 00:28:44.928 | 99.00th=[52167], 99.50th=[54789], 99.90th=[62653], 99.95th=[63701], 00:28:44.928 | 99.99th=[63701] 00:28:44.928 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1999.32, stdev=68.16, samples=19 00:28:44.928 iops : min= 448, max= 512, avg=499.79, stdev=17.16, samples=19 00:28:44.928 lat (msec) : 10=0.84%, 20=0.42%, 50=97.48%, 100=1.26% 00:28:44.928 cpu : usr=95.72%, sys=2.28%, ctx=76, majf=0, minf=44 00:28:44.928 IO depths : 1=2.2%, 2=7.3%, 4=22.0%, 8=58.0%, 16=10.6%, 32=0.0%, >=64=0.0% 00:28:44.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.928 complete : 0=0.0%, 4=93.3%, 8=1.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.928 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.928 filename0: (groupid=0, jobs=1): err= 0: pid=768734: Sat Jul 27 01:36:34 2024 00:28:44.928 read: IOPS=506, BW=2027KiB/s (2075kB/s)(19.8MiB/10023msec) 00:28:44.928 slat (usec): min=7, max=166, avg=30.03, stdev=20.73 00:28:44.928 clat (usec): min=7187, max=48928, avg=31331.79, stdev=2740.02 00:28:44.928 lat (usec): min=7210, max=48988, avg=31361.81, stdev=2741.77 00:28:44.928 clat percentiles (usec): 00:28:44.928 | 1.00th=[13566], 5.00th=[30278], 10.00th=[30802], 20.00th=[31065], 00:28:44.928 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:28:44.928 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[33162], 00:28:44.928 | 99.00th=[34866], 99.50th=[36439], 99.90th=[48497], 99.95th=[49021], 00:28:44.928 | 99.99th=[49021] 00:28:44.928 bw ( KiB/s): min= 1920, max= 2224, per=4.21%, avg=2024.80, stdev=72.02, samples=20 00:28:44.928 iops : min= 480, max= 556, avg=506.20, stdev=18.00, samples=20 00:28:44.928 lat (msec) : 10=0.71%, 20=0.63%, 50=98.66% 00:28:44.928 cpu : usr=98.41%, sys=1.18%, ctx=14, majf=0, minf=33 00:28:44.928 IO depths : 1=5.9%, 2=12.0%, 4=24.5%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:28:44.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.928 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.928 issued rwts: total=5078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.928 filename0: (groupid=0, jobs=1): err= 0: pid=768735: Sat Jul 27 01:36:34 2024 00:28:44.928 read: IOPS=499, BW=2000KiB/s (2048kB/s)(19.5MiB/10010msec) 00:28:44.928 slat (usec): min=7, max=129, avg=39.25, stdev=16.89 00:28:44.928 clat (usec): min=16215, max=72917, avg=31686.12, stdev=2872.04 00:28:44.928 lat (usec): min=16223, max=72951, avg=31725.37, stdev=2871.53 00:28:44.928 clat percentiles (usec): 00:28:44.928 | 1.00th=[27657], 5.00th=[30278], 10.00th=[30540], 20.00th=[31065], 00:28:44.928 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:28:44.928 | 
70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[33817], 00:28:44.928 | 99.00th=[37487], 99.50th=[49021], 99.90th=[72877], 99.95th=[72877], 00:28:44.928 | 99.99th=[72877] 00:28:44.928 bw ( KiB/s): min= 1779, max= 2048, per=4.14%, avg=1992.58, stdev=76.71, samples=19 00:28:44.928 iops : min= 444, max= 512, avg=498.11, stdev=19.29, samples=19 00:28:44.928 lat (msec) : 20=0.20%, 50=99.40%, 100=0.40% 00:28:44.928 cpu : usr=98.63%, sys=0.98%, ctx=28, majf=0, minf=46 00:28:44.928 IO depths : 1=2.4%, 2=8.4%, 4=24.2%, 8=54.8%, 16=10.1%, 32=0.0%, >=64=0.0% 00:28:44.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.928 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.928 issued rwts: total=5004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.928 filename0: (groupid=0, jobs=1): err= 0: pid=768736: Sat Jul 27 01:36:34 2024 00:28:44.928 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10011msec) 00:28:44.928 slat (usec): min=8, max=276, avg=41.44, stdev=20.62 00:28:44.928 clat (usec): min=12608, max=73384, avg=31589.84, stdev=2543.78 00:28:44.928 lat (usec): min=12627, max=73418, avg=31631.29, stdev=2542.77 00:28:44.928 clat percentiles (usec): 00:28:44.928 | 1.00th=[29230], 5.00th=[30278], 10.00th=[30802], 20.00th=[31065], 00:28:44.928 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:28:44.928 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32375], 95.00th=[32900], 00:28:44.928 | 99.00th=[33817], 99.50th=[34341], 99.90th=[72877], 99.95th=[72877], 00:28:44.928 | 99.99th=[72877] 00:28:44.929 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1994.11, stdev=77.69, samples=19 00:28:44.929 iops : min= 448, max= 512, avg=498.53, stdev=19.42, samples=19 00:28:44.929 lat (msec) : 20=0.04%, 50=99.60%, 100=0.36% 00:28:44.929 cpu : usr=90.59%, sys=4.26%, ctx=234, majf=0, minf=60 00:28:44.929 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.929 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.929 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.929 filename1: (groupid=0, jobs=1): err= 0: pid=768737: Sat Jul 27 01:36:34 2024 00:28:44.929 read: IOPS=511, BW=2047KiB/s (2097kB/s)(20.1MiB/10028msec) 00:28:44.929 slat (usec): min=5, max=566, avg=31.20, stdev=19.06 00:28:44.929 clat (usec): min=3535, max=55846, avg=31000.97, stdev=4116.89 00:28:44.929 lat (usec): min=3551, max=55854, avg=31032.18, stdev=4119.19 00:28:44.929 clat percentiles (usec): 00:28:44.929 | 1.00th=[ 8094], 5.00th=[27919], 10.00th=[30278], 20.00th=[30802], 00:28:44.929 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:28:44.929 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[33817], 00:28:44.929 | 99.00th=[38536], 99.50th=[45876], 99.90th=[54264], 99.95th=[55313], 00:28:44.929 | 99.99th=[55837] 00:28:44.929 bw ( KiB/s): min= 1920, max= 2280, per=4.26%, avg=2046.80, stdev=79.63, samples=20 00:28:44.929 iops : min= 480, max= 570, avg=511.70, stdev=19.91, samples=20 00:28:44.929 lat (msec) : 4=0.14%, 10=1.21%, 20=1.64%, 50=96.84%, 100=0.18% 00:28:44.929 cpu : usr=92.81%, sys=3.57%, ctx=252, majf=0, minf=72 00:28:44.929 IO depths : 1=4.3%, 2=9.6%, 4=23.2%, 8=54.6%, 16=8.3%, 32=0.0%, >=64=0.0% 00:28:44.929 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.929 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.929 issued rwts: total=5133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.929 filename1: (groupid=0, jobs=1): err= 0: pid=768738: Sat Jul 27 01:36:34 2024 00:28:44.929 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10024msec) 00:28:44.929 slat (usec): min=8, max=133, avg=41.96, stdev=16.82 00:28:44.929 clat (usec): min=16937, max=51546, avg=31542.71, stdev=1598.78 00:28:44.929 lat (usec): min=16946, max=51592, avg=31584.67, stdev=1598.69 00:28:44.929 clat percentiles (usec): 00:28:44.929 | 1.00th=[28967], 5.00th=[30278], 10.00th=[30540], 20.00th=[31065], 00:28:44.929 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:28:44.929 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32375], 95.00th=[33162], 00:28:44.929 | 99.00th=[34341], 99.50th=[39060], 99.90th=[51643], 99.95th=[51643], 00:28:44.929 | 99.99th=[51643] 00:28:44.929 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=2003.20, stdev=75.15, samples=20 00:28:44.929 iops : min= 448, max= 512, avg=500.80, stdev=18.79, samples=20 00:28:44.929 lat (msec) : 20=0.08%, 50=99.56%, 100=0.36% 00:28:44.929 cpu : usr=97.55%, sys=1.49%, ctx=54, majf=0, minf=48 00:28:44.929 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:28:44.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.929 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.929 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.929 filename1: (groupid=0, jobs=1): err= 0: pid=768739: Sat Jul 27 01:36:34 2024 00:28:44.929 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10018msec) 00:28:44.929 slat (usec): min=7, max=136, avg=36.21, stdev=14.85 00:28:44.929 clat (usec): min=14461, max=81513, avg=31617.90, stdev=3784.39 00:28:44.929 lat (usec): min=14470, max=81533, avg=31654.11, stdev=3784.44 00:28:44.929 clat percentiles (usec): 00:28:44.929 | 1.00th=[19530], 5.00th=[30016], 10.00th=[30540], 20.00th=[31065], 00:28:44.929 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:28:44.929 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[33817], 00:28:44.929 | 99.00th=[43254], 99.50th=[52167], 99.90th=[80217], 99.95th=[81265], 00:28:44.929 | 99.99th=[81265] 00:28:44.929 bw ( KiB/s): min= 1664, max= 2096, per=4.16%, avg=2001.30, stdev=95.71, samples=20 00:28:44.929 iops : min= 416, max= 524, avg=500.25, stdev=23.90, samples=20 00:28:44.929 lat (msec) : 20=1.12%, 50=98.33%, 100=0.56% 00:28:44.929 cpu : usr=93.81%, sys=2.97%, ctx=182, majf=0, minf=44 00:28:44.929 IO depths : 1=3.6%, 2=9.1%, 4=22.4%, 8=55.7%, 16=9.2%, 32=0.0%, >=64=0.0% 00:28:44.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.929 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.929 issued rwts: total=5022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.929 filename1: (groupid=0, jobs=1): err= 0: pid=768740: Sat Jul 27 01:36:34 2024 00:28:44.929 read: IOPS=503, BW=2013KiB/s (2061kB/s)(19.7MiB/10023msec) 00:28:44.929 slat (usec): min=5, max=105, avg=30.22, stdev=16.45 00:28:44.929 clat (usec): min=6104, max=51780, avg=31563.05, stdev=2429.50 
00:28:44.929 lat (usec): min=6112, max=51795, avg=31593.28, stdev=2428.89 00:28:44.929 clat percentiles (usec): 00:28:44.929 | 1.00th=[25297], 5.00th=[30278], 10.00th=[30540], 20.00th=[31065], 00:28:44.929 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:28:44.929 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[33817], 00:28:44.929 | 99.00th=[38011], 99.50th=[44827], 99.90th=[51643], 99.95th=[51643], 00:28:44.929 | 99.99th=[51643] 00:28:44.929 bw ( KiB/s): min= 1792, max= 2096, per=4.18%, avg=2010.80, stdev=74.79, samples=20 00:28:44.929 iops : min= 448, max= 524, avg=502.70, stdev=18.70, samples=20 00:28:44.929 lat (msec) : 10=0.26%, 20=0.38%, 50=99.05%, 100=0.32% 00:28:44.929 cpu : usr=98.55%, sys=1.04%, ctx=22, majf=0, minf=50 00:28:44.929 IO depths : 1=2.8%, 2=7.8%, 4=21.1%, 8=57.9%, 16=10.4%, 32=0.0%, >=64=0.0% 00:28:44.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.929 complete : 0=0.0%, 4=93.4%, 8=1.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.929 issued rwts: total=5043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.929 filename1: (groupid=0, jobs=1): err= 0: pid=768741: Sat Jul 27 01:36:34 2024 00:28:44.929 read: IOPS=492, BW=1968KiB/s (2016kB/s)(19.2MiB/10006msec) 00:28:44.929 slat (usec): min=7, max=1030, avg=39.75, stdev=30.52 00:28:44.929 clat (usec): min=6500, max=68929, avg=32223.39, stdev=4776.55 00:28:44.929 lat (usec): min=6510, max=68966, avg=32263.14, stdev=4778.64 00:28:44.929 clat percentiles (usec): 00:28:44.929 | 1.00th=[20579], 5.00th=[29754], 10.00th=[30540], 20.00th=[30802], 00:28:44.929 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31851], 00:28:44.929 | 70.00th=[31851], 80.00th=[32375], 90.00th=[33817], 95.00th=[37487], 00:28:44.929 | 99.00th=[54264], 99.50th=[57410], 99.90th=[67634], 99.95th=[68682], 00:28:44.929 | 99.99th=[68682] 00:28:44.929 bw ( KiB/s): min= 1728, max= 2096, per=4.09%, avg=1964.63, stdev=96.75, samples=19 00:28:44.929 iops : min= 432, max= 524, avg=491.16, stdev=24.19, samples=19 00:28:44.929 lat (msec) : 10=0.39%, 20=0.55%, 50=96.93%, 100=2.13% 00:28:44.929 cpu : usr=96.37%, sys=2.08%, ctx=139, majf=0, minf=48 00:28:44.929 IO depths : 1=1.4%, 2=5.5%, 4=17.2%, 8=63.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:28:44.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.929 complete : 0=0.0%, 4=92.6%, 8=2.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.929 issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.929 filename1: (groupid=0, jobs=1): err= 0: pid=768742: Sat Jul 27 01:36:34 2024 00:28:44.929 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.0MiB/10002msec) 00:28:44.929 slat (nsec): min=7744, max=91356, avg=24104.17, stdev=14181.56 00:28:44.929 clat (usec): min=1582, max=63642, avg=32712.56, stdev=5540.77 00:28:44.929 lat (usec): min=1592, max=63678, avg=32736.66, stdev=5540.17 00:28:44.929 clat percentiles (usec): 00:28:44.929 | 1.00th=[17695], 5.00th=[30016], 10.00th=[30540], 20.00th=[31065], 00:28:44.929 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:28:44.929 | 70.00th=[32113], 80.00th=[32637], 90.00th=[35914], 95.00th=[44303], 00:28:44.929 | 99.00th=[57410], 99.50th=[61080], 99.90th=[63177], 99.95th=[63177], 00:28:44.929 | 99.99th=[63701] 00:28:44.929 bw ( KiB/s): min= 1792, max= 2048, per=4.04%, avg=1940.37, stdev=84.94, samples=19 
00:28:44.929 iops : min= 448, max= 512, avg=485.05, stdev=21.30, samples=19 00:28:44.929 lat (msec) : 2=0.08%, 10=0.33%, 20=0.86%, 50=95.96%, 100=2.77% 00:28:44.929 cpu : usr=98.04%, sys=1.39%, ctx=112, majf=0, minf=79 00:28:44.929 IO depths : 1=0.2%, 2=0.6%, 4=5.6%, 8=78.6%, 16=15.0%, 32=0.0%, >=64=0.0% 00:28:44.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.929 complete : 0=0.0%, 4=89.7%, 8=7.2%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.929 issued rwts: total=4874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.929 filename1: (groupid=0, jobs=1): err= 0: pid=768743: Sat Jul 27 01:36:34 2024 00:28:44.929 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10023msec) 00:28:44.929 slat (usec): min=7, max=398, avg=30.69, stdev=17.30 00:28:44.929 clat (usec): min=12534, max=51442, avg=31675.25, stdev=2559.71 00:28:44.929 lat (usec): min=12543, max=51465, avg=31705.94, stdev=2559.65 00:28:44.929 clat percentiles (usec): 00:28:44.929 | 1.00th=[25560], 5.00th=[30278], 10.00th=[30540], 20.00th=[31065], 00:28:44.929 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:28:44.929 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[33424], 00:28:44.929 | 99.00th=[45351], 99.50th=[48497], 99.90th=[51119], 99.95th=[51643], 00:28:44.929 | 99.99th=[51643] 00:28:44.930 bw ( KiB/s): min= 1792, max= 2064, per=4.17%, avg=2003.20, stdev=75.33, samples=20 00:28:44.930 iops : min= 448, max= 516, avg=500.80, stdev=18.83, samples=20 00:28:44.930 lat (msec) : 20=0.64%, 50=99.00%, 100=0.36% 00:28:44.930 cpu : usr=95.34%, sys=2.33%, ctx=119, majf=0, minf=48 00:28:44.930 IO depths : 1=2.9%, 2=8.7%, 4=24.1%, 8=54.6%, 16=9.6%, 32=0.0%, >=64=0.0% 00:28:44.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.930 filename1: (groupid=0, jobs=1): err= 0: pid=768744: Sat Jul 27 01:36:34 2024 00:28:44.930 read: IOPS=507, BW=2030KiB/s (2079kB/s)(19.9MiB/10023msec) 00:28:44.930 slat (usec): min=7, max=149, avg=23.85, stdev=19.49 00:28:44.930 clat (usec): min=7346, max=58999, avg=31384.14, stdev=4031.49 00:28:44.930 lat (usec): min=7369, max=59065, avg=31407.99, stdev=4030.56 00:28:44.930 clat percentiles (usec): 00:28:44.930 | 1.00th=[13173], 5.00th=[27132], 10.00th=[30278], 20.00th=[30802], 00:28:44.930 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31589], 60.00th=[31851], 00:28:44.930 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[34341], 00:28:44.930 | 99.00th=[43779], 99.50th=[48497], 99.90th=[58459], 99.95th=[58459], 00:28:44.930 | 99.99th=[58983] 00:28:44.930 bw ( KiB/s): min= 1920, max= 2176, per=4.22%, avg=2028.40, stdev=70.72, samples=20 00:28:44.930 iops : min= 480, max= 544, avg=507.10, stdev=17.68, samples=20 00:28:44.930 lat (msec) : 10=0.71%, 20=1.79%, 50=97.15%, 100=0.35% 00:28:44.930 cpu : usr=98.23%, sys=1.34%, ctx=16, majf=0, minf=31 00:28:44.930 IO depths : 1=1.3%, 2=2.7%, 4=10.3%, 8=74.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:28:44.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 complete : 0=0.0%, 4=90.1%, 8=4.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 issued rwts: total=5087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.930 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:28:44.930 filename2: (groupid=0, jobs=1): err= 0: pid=768745: Sat Jul 27 01:36:34 2024 00:28:44.930 read: IOPS=511, BW=2045KiB/s (2095kB/s)(20.0MiB/10024msec) 00:28:44.930 slat (usec): min=7, max=107, avg=25.70, stdev=14.96 00:28:44.930 clat (usec): min=12993, max=62121, avg=31094.78, stdev=3819.13 00:28:44.930 lat (usec): min=13002, max=62140, avg=31120.48, stdev=3821.31 00:28:44.930 clat percentiles (usec): 00:28:44.930 | 1.00th=[16581], 5.00th=[23987], 10.00th=[29230], 20.00th=[30802], 00:28:44.930 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:28:44.930 | 70.00th=[31851], 80.00th=[32113], 90.00th=[33162], 95.00th=[34341], 00:28:44.930 | 99.00th=[39584], 99.50th=[47449], 99.90th=[62129], 99.95th=[62129], 00:28:44.930 | 99.99th=[62129] 00:28:44.930 bw ( KiB/s): min= 1792, max= 2320, per=4.25%, avg=2044.00, stdev=124.79, samples=20 00:28:44.930 iops : min= 448, max= 580, avg=511.00, stdev=31.20, samples=20 00:28:44.930 lat (msec) : 20=2.97%, 50=96.57%, 100=0.47% 00:28:44.930 cpu : usr=98.19%, sys=1.41%, ctx=15, majf=0, minf=50 00:28:44.930 IO depths : 1=3.7%, 2=7.5%, 4=17.1%, 8=62.6%, 16=9.1%, 32=0.0%, >=64=0.0% 00:28:44.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 complete : 0=0.0%, 4=91.9%, 8=2.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 issued rwts: total=5126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.930 filename2: (groupid=0, jobs=1): err= 0: pid=768746: Sat Jul 27 01:36:34 2024 00:28:44.930 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10001msec) 00:28:44.930 slat (usec): min=8, max=166, avg=61.87, stdev=33.64 00:28:44.930 clat (usec): min=8000, max=63241, avg=31996.37, stdev=4208.55 00:28:44.930 lat (usec): min=8031, max=63288, avg=32058.24, stdev=4207.66 00:28:44.930 clat percentiles (usec): 00:28:44.930 | 1.00th=[18744], 5.00th=[30016], 10.00th=[30540], 20.00th=[31065], 00:28:44.930 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:28:44.930 | 70.00th=[31851], 80.00th=[32113], 90.00th=[33162], 95.00th=[37487], 00:28:44.930 | 99.00th=[52691], 99.50th=[54789], 99.90th=[63177], 99.95th=[63177], 00:28:44.930 | 99.99th=[63177] 00:28:44.930 bw ( KiB/s): min= 1776, max= 2096, per=4.10%, avg=1971.37, stdev=88.25, samples=19 00:28:44.930 iops : min= 444, max= 524, avg=492.84, stdev=22.06, samples=19 00:28:44.930 lat (msec) : 10=0.02%, 20=1.01%, 50=97.77%, 100=1.19% 00:28:44.930 cpu : usr=98.42%, sys=1.14%, ctx=13, majf=0, minf=52 00:28:44.930 IO depths : 1=0.7%, 2=5.3%, 4=19.3%, 8=61.7%, 16=12.9%, 32=0.0%, >=64=0.0% 00:28:44.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 complete : 0=0.0%, 4=93.1%, 8=2.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 issued rwts: total=4940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.930 filename2: (groupid=0, jobs=1): err= 0: pid=768747: Sat Jul 27 01:36:34 2024 00:28:44.930 read: IOPS=504, BW=2017KiB/s (2066kB/s)(19.8MiB/10025msec) 00:28:44.930 slat (usec): min=7, max=189, avg=34.92, stdev=20.62 00:28:44.930 clat (usec): min=6842, max=50555, avg=31428.10, stdev=3311.14 00:28:44.930 lat (usec): min=6866, max=50572, avg=31463.02, stdev=3312.29 00:28:44.930 clat percentiles (usec): 00:28:44.930 | 1.00th=[17433], 5.00th=[30016], 10.00th=[30540], 20.00th=[30802], 00:28:44.930 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 
60.00th=[31589], 00:28:44.930 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[33817], 00:28:44.930 | 99.00th=[43779], 99.50th=[45876], 99.90th=[47449], 99.95th=[49546], 00:28:44.930 | 99.99th=[50594] 00:28:44.930 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=2016.00, stdev=55.18, samples=20 00:28:44.930 iops : min= 480, max= 512, avg=504.00, stdev=13.80, samples=20 00:28:44.930 lat (msec) : 10=0.59%, 20=1.33%, 50=98.04%, 100=0.04% 00:28:44.930 cpu : usr=93.69%, sys=3.02%, ctx=112, majf=0, minf=31 00:28:44.930 IO depths : 1=4.0%, 2=9.1%, 4=21.6%, 8=56.8%, 16=8.5%, 32=0.0%, >=64=0.0% 00:28:44.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.930 filename2: (groupid=0, jobs=1): err= 0: pid=768748: Sat Jul 27 01:36:34 2024 00:28:44.930 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10013msec) 00:28:44.930 slat (usec): min=7, max=124, avg=39.45, stdev=16.45 00:28:44.930 clat (usec): min=25542, max=75493, avg=31639.21, stdev=2617.52 00:28:44.930 lat (usec): min=25578, max=75535, avg=31678.65, stdev=2616.14 00:28:44.930 clat percentiles (usec): 00:28:44.930 | 1.00th=[29230], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:28:44.930 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:28:44.930 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[33162], 00:28:44.930 | 99.00th=[34341], 99.50th=[35390], 99.90th=[74974], 99.95th=[74974], 00:28:44.930 | 99.99th=[74974] 00:28:44.930 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1996.80, stdev=76.58, samples=20 00:28:44.930 iops : min= 448, max= 512, avg=499.20, stdev=19.14, samples=20 00:28:44.930 lat (msec) : 50=99.68%, 100=0.32% 00:28:44.930 cpu : usr=98.55%, sys=1.07%, ctx=14, majf=0, minf=48 00:28:44.930 IO depths : 1=5.7%, 2=11.6%, 4=23.6%, 8=52.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:28:44.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.930 filename2: (groupid=0, jobs=1): err= 0: pid=768749: Sat Jul 27 01:36:34 2024 00:28:44.930 read: IOPS=497, BW=1989KiB/s (2036kB/s)(19.4MiB/10001msec) 00:28:44.930 slat (usec): min=7, max=1297, avg=41.15, stdev=29.77 00:28:44.930 clat (usec): min=5871, max=63153, avg=31875.36, stdev=3272.78 00:28:44.930 lat (usec): min=5895, max=63186, avg=31916.51, stdev=3273.45 00:28:44.930 clat percentiles (usec): 00:28:44.930 | 1.00th=[26346], 5.00th=[30278], 10.00th=[30540], 20.00th=[31065], 00:28:44.930 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:28:44.930 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[34341], 00:28:44.930 | 99.00th=[50070], 99.50th=[54789], 99.90th=[63177], 99.95th=[63177], 00:28:44.930 | 99.99th=[63177] 00:28:44.930 bw ( KiB/s): min= 1776, max= 2064, per=4.13%, avg=1985.68, stdev=75.59, samples=19 00:28:44.930 iops : min= 444, max= 516, avg=496.42, stdev=18.90, samples=19 00:28:44.930 lat (msec) : 10=0.08%, 20=0.20%, 50=98.63%, 100=1.09% 00:28:44.930 cpu : usr=95.74%, sys=2.25%, ctx=189, majf=0, minf=60 00:28:44.930 IO depths : 1=1.4%, 2=6.2%, 4=20.0%, 8=60.1%, 16=12.3%, 32=0.0%, 
>=64=0.0% 00:28:44.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 complete : 0=0.0%, 4=93.4%, 8=2.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 issued rwts: total=4972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.930 filename2: (groupid=0, jobs=1): err= 0: pid=768750: Sat Jul 27 01:36:34 2024 00:28:44.930 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10005msec) 00:28:44.930 slat (usec): min=7, max=336, avg=36.87, stdev=16.86 00:28:44.930 clat (usec): min=8700, max=68473, avg=31691.01, stdev=3604.31 00:28:44.930 lat (usec): min=8722, max=68504, avg=31727.88, stdev=3604.40 00:28:44.930 clat percentiles (usec): 00:28:44.930 | 1.00th=[20055], 5.00th=[30016], 10.00th=[30540], 20.00th=[31065], 00:28:44.930 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:28:44.930 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[33817], 00:28:44.930 | 99.00th=[51119], 99.50th=[52691], 99.90th=[66847], 99.95th=[66847], 00:28:44.930 | 99.99th=[68682] 00:28:44.930 bw ( KiB/s): min= 1840, max= 2048, per=4.16%, avg=1999.16, stdev=62.31, samples=19 00:28:44.930 iops : min= 460, max= 512, avg=499.79, stdev=15.58, samples=19 00:28:44.930 lat (msec) : 10=0.08%, 20=0.92%, 50=97.80%, 100=1.20% 00:28:44.930 cpu : usr=89.00%, sys=4.94%, ctx=383, majf=0, minf=48 00:28:44.930 IO depths : 1=2.3%, 2=7.2%, 4=20.1%, 8=59.3%, 16=11.0%, 32=0.0%, >=64=0.0% 00:28:44.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.930 complete : 0=0.0%, 4=93.1%, 8=2.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.931 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.931 filename2: (groupid=0, jobs=1): err= 0: pid=768751: Sat Jul 27 01:36:34 2024 00:28:44.931 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10024msec) 00:28:44.931 slat (usec): min=8, max=122, avg=40.77, stdev=15.92 00:28:44.931 clat (usec): min=19909, max=51437, avg=31589.18, stdev=1573.29 00:28:44.931 lat (usec): min=19919, max=51464, avg=31629.95, stdev=1572.31 00:28:44.931 clat percentiles (usec): 00:28:44.931 | 1.00th=[28705], 5.00th=[30278], 10.00th=[30540], 20.00th=[31065], 00:28:44.931 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:28:44.931 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[33162], 00:28:44.931 | 99.00th=[35390], 99.50th=[38536], 99.90th=[51119], 99.95th=[51643], 00:28:44.931 | 99.99th=[51643] 00:28:44.931 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=2003.20, stdev=75.15, samples=20 00:28:44.931 iops : min= 448, max= 512, avg=500.80, stdev=18.79, samples=20 00:28:44.931 lat (msec) : 20=0.04%, 50=99.64%, 100=0.32% 00:28:44.931 cpu : usr=98.62%, sys=0.98%, ctx=13, majf=0, minf=32 00:28:44.931 IO depths : 1=3.1%, 2=9.3%, 4=24.8%, 8=53.4%, 16=9.4%, 32=0.0%, >=64=0.0% 00:28:44.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.931 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.931 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.931 filename2: (groupid=0, jobs=1): err= 0: pid=768752: Sat Jul 27 01:36:34 2024 00:28:44.931 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10023msec) 00:28:44.931 slat (usec): min=7, max=121, avg=31.70, stdev=17.37 00:28:44.931 clat (usec): min=25063, 
max=52106, avg=31651.60, stdev=1640.49 00:28:44.931 lat (usec): min=25073, max=52133, avg=31683.30, stdev=1638.60 00:28:44.931 clat percentiles (usec): 00:28:44.931 | 1.00th=[27919], 5.00th=[30278], 10.00th=[30540], 20.00th=[31065], 00:28:44.931 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:28:44.931 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[33817], 00:28:44.931 | 99.00th=[36963], 99.50th=[37487], 99.90th=[52167], 99.95th=[52167], 00:28:44.931 | 99.99th=[52167] 00:28:44.931 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=2003.20, stdev=73.89, samples=20 00:28:44.931 iops : min= 448, max= 512, avg=500.80, stdev=18.47, samples=20 00:28:44.931 lat (msec) : 50=99.68%, 100=0.32% 00:28:44.931 cpu : usr=98.48%, sys=1.12%, ctx=15, majf=0, minf=36 00:28:44.931 IO depths : 1=4.0%, 2=10.0%, 4=23.9%, 8=53.6%, 16=8.5%, 32=0.0%, >=64=0.0% 00:28:44.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.931 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.931 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.931 00:28:44.931 Run status group 0 (all jobs): 00:28:44.931 READ: bw=46.9MiB/s (49.2MB/s), 1949KiB/s-2047KiB/s (1996kB/s-2097kB/s), io=471MiB (494MB), run=10001-10028msec 00:28:44.931 01:36:34 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:44.931 01:36:34 -- target/dif.sh@43 -- # local sub 00:28:44.931 01:36:34 -- target/dif.sh@45 -- # for sub in "$@" 00:28:44.931 01:36:34 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:44.931 01:36:34 -- target/dif.sh@36 -- # local sub_id=0 00:28:44.931 01:36:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:44.931 01:36:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.931 01:36:34 -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 01:36:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.931 01:36:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:44.931 01:36:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.931 01:36:34 -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 01:36:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.931 01:36:34 -- target/dif.sh@45 -- # for sub in "$@" 00:28:44.931 01:36:34 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:44.931 01:36:34 -- target/dif.sh@36 -- # local sub_id=1 00:28:44.931 01:36:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:44.931 01:36:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.931 01:36:34 -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 01:36:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.931 01:36:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:44.931 01:36:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.931 01:36:34 -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 01:36:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.931 01:36:34 -- target/dif.sh@45 -- # for sub in "$@" 00:28:44.931 01:36:34 -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:44.931 01:36:34 -- target/dif.sh@36 -- # local sub_id=2 00:28:44.931 01:36:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:44.931 01:36:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.931 01:36:34 -- common/autotest_common.sh@10 -- 
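The destroy_subsystems helper traced through here removes each test target by deleting the NVMe-oF subsystem first and then its backing null bdev. A minimal sketch of that sequence, assuming rpc_cmd is a thin wrapper around scripts/rpc.py talking to the target's default RPC socket:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for sub_id in 0 1 2; do
        "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}"
        "$RPC" bdev_null_delete "bdev_null${sub_id}"
    done

The test deletes the subsystem before the bdev, so the namespace is detached from the target before its backing device disappears.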
# set +x 00:28:44.931 01:36:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.931 01:36:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:44.931 01:36:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.931 01:36:34 -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 01:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.931 01:36:35 -- target/dif.sh@115 -- # NULL_DIF=1 00:28:44.931 01:36:35 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:44.931 01:36:35 -- target/dif.sh@115 -- # numjobs=2 00:28:44.931 01:36:35 -- target/dif.sh@115 -- # iodepth=8 00:28:44.931 01:36:35 -- target/dif.sh@115 -- # runtime=5 00:28:44.931 01:36:35 -- target/dif.sh@115 -- # files=1 00:28:44.931 01:36:35 -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:44.931 01:36:35 -- target/dif.sh@28 -- # local sub 00:28:44.931 01:36:35 -- target/dif.sh@30 -- # for sub in "$@" 00:28:44.931 01:36:35 -- target/dif.sh@31 -- # create_subsystem 0 00:28:44.931 01:36:35 -- target/dif.sh@18 -- # local sub_id=0 00:28:44.931 01:36:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:44.931 01:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.931 01:36:35 -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 bdev_null0 00:28:44.931 01:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.931 01:36:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:44.931 01:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.931 01:36:35 -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 01:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.931 01:36:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:44.931 01:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.931 01:36:35 -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 01:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.931 01:36:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:44.931 01:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.931 01:36:35 -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 [2024-07-27 01:36:35.034949] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.931 01:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.931 01:36:35 -- target/dif.sh@30 -- # for sub in "$@" 00:28:44.931 01:36:35 -- target/dif.sh@31 -- # create_subsystem 1 00:28:44.931 01:36:35 -- target/dif.sh@18 -- # local sub_id=1 00:28:44.931 01:36:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:44.931 01:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.931 01:36:35 -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 bdev_null1 00:28:44.931 01:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.931 01:36:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:44.931 01:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.931 01:36:35 -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 01:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.931 01:36:35 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:44.931 01:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.931 01:36:35 -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 01:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.931 01:36:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.931 01:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.931 01:36:35 -- common/autotest_common.sh@10 -- # set +x 00:28:44.931 01:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.931 01:36:35 -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:44.931 01:36:35 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:44.931 01:36:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:44.931 01:36:35 -- nvmf/common.sh@520 -- # config=() 00:28:44.931 01:36:35 -- nvmf/common.sh@520 -- # local subsystem config 00:28:44.931 01:36:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:44.931 01:36:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:44.931 01:36:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:44.931 { 00:28:44.931 "params": { 00:28:44.931 "name": "Nvme$subsystem", 00:28:44.931 "trtype": "$TEST_TRANSPORT", 00:28:44.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.931 "adrfam": "ipv4", 00:28:44.931 "trsvcid": "$NVMF_PORT", 00:28:44.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.931 "hdgst": ${hdgst:-false}, 00:28:44.931 "ddgst": ${ddgst:-false} 00:28:44.931 }, 00:28:44.931 "method": "bdev_nvme_attach_controller" 00:28:44.931 } 00:28:44.931 EOF 00:28:44.931 )") 00:28:44.931 01:36:35 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:44.931 01:36:35 -- target/dif.sh@82 -- # gen_fio_conf 00:28:44.931 01:36:35 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:44.931 01:36:35 -- target/dif.sh@54 -- # local file 00:28:44.931 01:36:35 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:44.932 01:36:35 -- target/dif.sh@56 -- # cat 00:28:44.932 01:36:35 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:44.932 01:36:35 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:44.932 01:36:35 -- common/autotest_common.sh@1320 -- # shift 00:28:44.932 01:36:35 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:44.932 01:36:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:44.932 01:36:35 -- nvmf/common.sh@542 -- # cat 00:28:44.932 01:36:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:44.932 01:36:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:44.932 01:36:35 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:44.932 01:36:35 -- target/dif.sh@72 -- # (( file <= files )) 00:28:44.932 01:36:35 -- target/dif.sh@73 -- # cat 00:28:44.932 01:36:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:44.932 01:36:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:44.932 01:36:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:44.932 { 00:28:44.932 "params": { 00:28:44.932 "name": 
"Nvme$subsystem", 00:28:44.932 "trtype": "$TEST_TRANSPORT", 00:28:44.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.932 "adrfam": "ipv4", 00:28:44.932 "trsvcid": "$NVMF_PORT", 00:28:44.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.932 "hdgst": ${hdgst:-false}, 00:28:44.932 "ddgst": ${ddgst:-false} 00:28:44.932 }, 00:28:44.932 "method": "bdev_nvme_attach_controller" 00:28:44.932 } 00:28:44.932 EOF 00:28:44.932 )") 00:28:44.932 01:36:35 -- nvmf/common.sh@542 -- # cat 00:28:44.932 01:36:35 -- target/dif.sh@72 -- # (( file++ )) 00:28:44.932 01:36:35 -- target/dif.sh@72 -- # (( file <= files )) 00:28:44.932 01:36:35 -- nvmf/common.sh@544 -- # jq . 00:28:44.932 01:36:35 -- nvmf/common.sh@545 -- # IFS=, 00:28:44.932 01:36:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:44.932 "params": { 00:28:44.932 "name": "Nvme0", 00:28:44.932 "trtype": "tcp", 00:28:44.932 "traddr": "10.0.0.2", 00:28:44.932 "adrfam": "ipv4", 00:28:44.932 "trsvcid": "4420", 00:28:44.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:44.932 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:44.932 "hdgst": false, 00:28:44.932 "ddgst": false 00:28:44.932 }, 00:28:44.932 "method": "bdev_nvme_attach_controller" 00:28:44.932 },{ 00:28:44.932 "params": { 00:28:44.932 "name": "Nvme1", 00:28:44.932 "trtype": "tcp", 00:28:44.932 "traddr": "10.0.0.2", 00:28:44.932 "adrfam": "ipv4", 00:28:44.932 "trsvcid": "4420", 00:28:44.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:44.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:44.932 "hdgst": false, 00:28:44.932 "ddgst": false 00:28:44.932 }, 00:28:44.932 "method": "bdev_nvme_attach_controller" 00:28:44.932 }' 00:28:44.932 01:36:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:44.932 01:36:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:44.932 01:36:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:44.932 01:36:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:44.932 01:36:35 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:44.932 01:36:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:44.932 01:36:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:44.932 01:36:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:44.932 01:36:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:44.932 01:36:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:44.932 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:44.932 ... 00:28:44.932 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:44.932 ... 00:28:44.932 fio-3.35 00:28:44.932 Starting 4 threads 00:28:44.932 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.932 [2024-07-27 01:36:35.906054] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:28:44.932 [2024-07-27 01:36:35.906174] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:50.199 00:28:50.199 filename0: (groupid=0, jobs=1): err= 0: pid=770051: Sat Jul 27 01:36:41 2024 00:28:50.199 read: IOPS=1953, BW=15.3MiB/s (16.0MB/s)(76.3MiB/5002msec) 00:28:50.199 slat (nsec): min=5016, max=42459, avg=10664.03, stdev=3873.11 00:28:50.199 clat (usec): min=1709, max=7315, avg=4062.60, stdev=572.78 00:28:50.199 lat (usec): min=1732, max=7328, avg=4073.26, stdev=572.80 00:28:50.199 clat percentiles (usec): 00:28:50.199 | 1.00th=[ 2769], 5.00th=[ 3294], 10.00th=[ 3523], 20.00th=[ 3720], 00:28:50.199 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 4015], 60.00th=[ 4113], 00:28:50.199 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5211], 00:28:50.199 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 6915], 99.95th=[ 7046], 00:28:50.199 | 99.99th=[ 7308] 00:28:50.199 bw ( KiB/s): min=14784, max=16576, per=25.21%, avg=15505.78, stdev=627.86, samples=9 00:28:50.199 iops : min= 1848, max= 2072, avg=1938.22, stdev=78.48, samples=9 00:28:50.199 lat (msec) : 2=0.03%, 4=49.88%, 10=50.09% 00:28:50.199 cpu : usr=92.58%, sys=6.90%, ctx=16, majf=0, minf=116 00:28:50.199 IO depths : 1=0.2%, 2=2.0%, 4=70.5%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:50.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.199 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.199 issued rwts: total=9769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.199 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:50.199 filename0: (groupid=0, jobs=1): err= 0: pid=770052: Sat Jul 27 01:36:41 2024 00:28:50.199 read: IOPS=1864, BW=14.6MiB/s (15.3MB/s)(72.8MiB/5001msec) 00:28:50.199 slat (nsec): min=4476, max=44724, avg=10925.86, stdev=3969.05 00:28:50.199 clat (usec): min=1897, max=46848, avg=4259.16, stdev=1397.75 00:28:50.199 lat (usec): min=1910, max=46862, avg=4270.08, stdev=1397.52 00:28:50.199 clat percentiles (usec): 00:28:50.199 | 1.00th=[ 3032], 5.00th=[ 3523], 10.00th=[ 3654], 20.00th=[ 3818], 00:28:50.199 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4113], 60.00th=[ 4178], 00:28:50.199 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 5014], 95.00th=[ 5735], 00:28:50.199 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 6980], 99.95th=[46924], 00:28:50.199 | 99.99th=[46924] 00:28:50.199 bw ( KiB/s): min=13568, max=15888, per=24.40%, avg=15008.00, stdev=732.86, samples=9 00:28:50.199 iops : min= 1696, max= 1986, avg=1876.00, stdev=91.61, samples=9 00:28:50.199 lat (msec) : 2=0.01%, 4=39.58%, 10=60.32%, 50=0.09% 00:28:50.199 cpu : usr=91.56%, sys=7.94%, ctx=8, majf=0, minf=106 00:28:50.199 IO depths : 1=0.1%, 2=2.1%, 4=68.4%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:50.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.199 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.199 issued rwts: total=9323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.199 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:50.199 filename1: (groupid=0, jobs=1): err= 0: pid=770053: Sat Jul 27 01:36:41 2024 00:28:50.199 read: IOPS=1933, BW=15.1MiB/s (15.8MB/s)(75.6MiB/5003msec) 00:28:50.199 slat (nsec): min=4756, max=37000, avg=10533.18, stdev=3755.88 00:28:50.199 clat (usec): min=1007, max=7695, avg=4103.42, stdev=664.56 00:28:50.199 lat (usec): min=1019, max=7703, avg=4113.95, stdev=664.52 00:28:50.199 clat percentiles (usec): 00:28:50.199 | 
1.00th=[ 2900], 5.00th=[ 3294], 10.00th=[ 3490], 20.00th=[ 3687], 00:28:50.199 | 30.00th=[ 3818], 40.00th=[ 3916], 50.00th=[ 4015], 60.00th=[ 4080], 00:28:50.199 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 5014], 95.00th=[ 5800], 00:28:50.199 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 6718], 99.95th=[ 6849], 00:28:50.199 | 99.99th=[ 7701] 00:28:50.199 bw ( KiB/s): min=14560, max=15984, per=25.17%, avg=15478.20, stdev=405.66, samples=10 00:28:50.199 iops : min= 1820, max= 1998, avg=1934.70, stdev=50.75, samples=10 00:28:50.199 lat (msec) : 2=0.01%, 4=49.97%, 10=50.02% 00:28:50.199 cpu : usr=92.40%, sys=7.10%, ctx=7, majf=0, minf=90 00:28:50.199 IO depths : 1=0.2%, 2=2.3%, 4=69.8%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:50.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.199 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.199 issued rwts: total=9675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.199 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:50.199 filename1: (groupid=0, jobs=1): err= 0: pid=770054: Sat Jul 27 01:36:41 2024 00:28:50.199 read: IOPS=1938, BW=15.1MiB/s (15.9MB/s)(75.7MiB/5001msec) 00:28:50.199 slat (nsec): min=4434, max=43553, avg=10743.66, stdev=4089.32 00:28:50.199 clat (usec): min=942, max=7081, avg=4094.15, stdev=593.28 00:28:50.200 lat (usec): min=951, max=7095, avg=4104.90, stdev=593.01 00:28:50.200 clat percentiles (usec): 00:28:50.200 | 1.00th=[ 2900], 5.00th=[ 3359], 10.00th=[ 3556], 20.00th=[ 3720], 00:28:50.200 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4113], 00:28:50.200 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5342], 00:28:50.200 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 6915], 99.95th=[ 6980], 00:28:50.200 | 99.99th=[ 7111] 00:28:50.200 bw ( KiB/s): min=14416, max=16256, per=25.00%, avg=15374.00, stdev=518.59, samples=9 00:28:50.200 iops : min= 1802, max= 2032, avg=1921.67, stdev=64.89, samples=9 00:28:50.200 lat (usec) : 1000=0.02% 00:28:50.200 lat (msec) : 2=0.04%, 4=48.45%, 10=51.49% 00:28:50.200 cpu : usr=92.16%, sys=7.36%, ctx=12, majf=0, minf=117 00:28:50.200 IO depths : 1=0.2%, 2=1.9%, 4=70.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:50.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.200 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.200 issued rwts: total=9693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.200 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:50.200 00:28:50.200 Run status group 0 (all jobs): 00:28:50.200 READ: bw=60.1MiB/s (63.0MB/s), 14.6MiB/s-15.3MiB/s (15.3MB/s-16.0MB/s), io=300MiB (315MB), run=5001-5003msec 00:28:50.200 01:36:41 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:50.200 01:36:41 -- target/dif.sh@43 -- # local sub 00:28:50.200 01:36:41 -- target/dif.sh@45 -- # for sub in "$@" 00:28:50.200 01:36:41 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:50.200 01:36:41 -- target/dif.sh@36 -- # local sub_id=0 00:28:50.200 01:36:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:50.200 01:36:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.200 01:36:41 -- common/autotest_common.sh@10 -- # set +x 00:28:50.200 01:36:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.200 01:36:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:50.200 01:36:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.200 
01:36:41 -- common/autotest_common.sh@10 -- # set +x 00:28:50.200 01:36:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.200 01:36:41 -- target/dif.sh@45 -- # for sub in "$@" 00:28:50.200 01:36:41 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:50.200 01:36:41 -- target/dif.sh@36 -- # local sub_id=1 00:28:50.200 01:36:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:50.200 01:36:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.200 01:36:41 -- common/autotest_common.sh@10 -- # set +x 00:28:50.200 01:36:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.200 01:36:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:50.200 01:36:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.200 01:36:41 -- common/autotest_common.sh@10 -- # set +x 00:28:50.200 01:36:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.200 00:28:50.200 real 0m24.138s 00:28:50.200 user 4m28.709s 00:28:50.200 sys 0m8.364s 00:28:50.200 01:36:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.200 01:36:41 -- common/autotest_common.sh@10 -- # set +x 00:28:50.200 ************************************ 00:28:50.200 END TEST fio_dif_rand_params 00:28:50.200 ************************************ 00:28:50.200 01:36:41 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:50.200 01:36:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:50.200 01:36:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:50.200 01:36:41 -- common/autotest_common.sh@10 -- # set +x 00:28:50.200 ************************************ 00:28:50.200 START TEST fio_dif_digest 00:28:50.200 ************************************ 00:28:50.200 01:36:41 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:28:50.200 01:36:41 -- target/dif.sh@123 -- # local NULL_DIF 00:28:50.200 01:36:41 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:50.200 01:36:41 -- target/dif.sh@125 -- # local hdgst ddgst 00:28:50.200 01:36:41 -- target/dif.sh@127 -- # NULL_DIF=3 00:28:50.200 01:36:41 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:50.200 01:36:41 -- target/dif.sh@127 -- # numjobs=3 00:28:50.200 01:36:41 -- target/dif.sh@127 -- # iodepth=3 00:28:50.200 01:36:41 -- target/dif.sh@127 -- # runtime=10 00:28:50.200 01:36:41 -- target/dif.sh@128 -- # hdgst=true 00:28:50.200 01:36:41 -- target/dif.sh@128 -- # ddgst=true 00:28:50.200 01:36:41 -- target/dif.sh@130 -- # create_subsystems 0 00:28:50.200 01:36:41 -- target/dif.sh@28 -- # local sub 00:28:50.200 01:36:41 -- target/dif.sh@30 -- # for sub in "$@" 00:28:50.200 01:36:41 -- target/dif.sh@31 -- # create_subsystem 0 00:28:50.200 01:36:41 -- target/dif.sh@18 -- # local sub_id=0 00:28:50.200 01:36:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:50.200 01:36:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.200 01:36:41 -- common/autotest_common.sh@10 -- # set +x 00:28:50.200 bdev_null0 00:28:50.200 01:36:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.200 01:36:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:50.200 01:36:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.200 01:36:41 -- common/autotest_common.sh@10 -- # set +x 00:28:50.200 01:36:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.200 01:36:41 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:50.200 01:36:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.200 01:36:41 -- common/autotest_common.sh@10 -- # set +x 00:28:50.200 01:36:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.200 01:36:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:50.200 01:36:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.200 01:36:41 -- common/autotest_common.sh@10 -- # set +x 00:28:50.200 [2024-07-27 01:36:41.441161] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.200 01:36:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.200 01:36:41 -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:50.200 01:36:41 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:50.200 01:36:41 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:50.200 01:36:41 -- nvmf/common.sh@520 -- # config=() 00:28:50.200 01:36:41 -- nvmf/common.sh@520 -- # local subsystem config 00:28:50.200 01:36:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:50.200 01:36:41 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.200 01:36:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:50.200 { 00:28:50.200 "params": { 00:28:50.200 "name": "Nvme$subsystem", 00:28:50.200 "trtype": "$TEST_TRANSPORT", 00:28:50.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.200 "adrfam": "ipv4", 00:28:50.200 "trsvcid": "$NVMF_PORT", 00:28:50.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.200 "hdgst": ${hdgst:-false}, 00:28:50.200 "ddgst": ${ddgst:-false} 00:28:50.200 }, 00:28:50.200 "method": "bdev_nvme_attach_controller" 00:28:50.200 } 00:28:50.200 EOF 00:28:50.200 )") 00:28:50.200 01:36:41 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.200 01:36:41 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:50.200 01:36:41 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:50.200 01:36:41 -- target/dif.sh@82 -- # gen_fio_conf 00:28:50.200 01:36:41 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:50.200 01:36:41 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.200 01:36:41 -- target/dif.sh@54 -- # local file 00:28:50.200 01:36:41 -- common/autotest_common.sh@1320 -- # shift 00:28:50.200 01:36:41 -- target/dif.sh@56 -- # cat 00:28:50.200 01:36:41 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:50.200 01:36:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.200 01:36:41 -- nvmf/common.sh@542 -- # cat 00:28:50.200 01:36:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.200 01:36:41 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:50.200 01:36:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:50.200 01:36:41 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:50.200 01:36:41 -- target/dif.sh@72 -- # (( file <= files )) 00:28:50.200 01:36:41 -- nvmf/common.sh@544 -- # jq . 
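The target-side setup for the digest run, condensed from the RPCs traced above into standalone commands; rpc_cmd is assumed to wrap scripts/rpc.py, and the sizes and NQN are the ones visible in the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3
    "$RPC" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

    # export it over NVMe/TCP on 10.0.0.2:4420
    # (the tcp transport itself was created earlier in the test run)
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420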
00:28:50.200 01:36:41 -- nvmf/common.sh@545 -- # IFS=, 00:28:50.200 01:36:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:50.200 "params": { 00:28:50.200 "name": "Nvme0", 00:28:50.200 "trtype": "tcp", 00:28:50.200 "traddr": "10.0.0.2", 00:28:50.200 "adrfam": "ipv4", 00:28:50.200 "trsvcid": "4420", 00:28:50.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:50.200 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:50.200 "hdgst": true, 00:28:50.200 "ddgst": true 00:28:50.200 }, 00:28:50.200 "method": "bdev_nvme_attach_controller" 00:28:50.200 }' 00:28:50.200 01:36:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:50.200 01:36:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:50.200 01:36:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.200 01:36:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.200 01:36:41 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:50.200 01:36:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:50.200 01:36:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:50.200 01:36:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:50.200 01:36:41 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:50.200 01:36:41 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.200 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:50.200 ... 00:28:50.200 fio-3.35 00:28:50.200 Starting 3 threads 00:28:50.200 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.457 [2024-07-27 01:36:42.076450] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
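The digest workload itself is ordinary fio: three 128 KiB random-read jobs at queue depth 3 for 10 seconds, with header and data digests enabled through the attach-controller parameters ("hdgst": true, "ddgst": true above) rather than in the job file. The generated job file is not shown in the log; a hypothetical equivalent matching the visible parameters might look like:

    cat > /tmp/dif_digest.fio <<'EOF'
    ; hypothetical job file matching the parameters visible in the digest run
    [global]
    ioengine=spdk_bdev
    ; the SPDK bdev engine requires thread mode
    thread=1
    direct=1
    time_based=1
    runtime=10
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3

    [filename0]
    ; bdev name assumed: controller "Nvme0" exposes namespace 1 as Nvme0n1
    filename=Nvme0n1
    EOF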
00:28:50.457 [2024-07-27 01:36:42.076517] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:29:02.660 00:29:02.660 filename0: (groupid=0, jobs=1): err= 0: pid=770945: Sat Jul 27 01:36:52 2024 00:29:02.660 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(259MiB/10043msec) 00:29:02.660 slat (nsec): min=4629, max=35683, avg=13213.90, stdev=3401.76 00:29:02.660 clat (usec): min=8641, max=50694, avg=14490.19, stdev=1822.65 00:29:02.660 lat (usec): min=8653, max=50706, avg=14503.40, stdev=1822.67 00:29:02.660 clat percentiles (usec): 00:29:02.660 | 1.00th=[10290], 5.00th=[11731], 10.00th=[12780], 20.00th=[13435], 00:29:02.660 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14615], 60.00th=[14877], 00:29:02.660 | 70.00th=[15270], 80.00th=[15533], 90.00th=[16057], 95.00th=[16581], 00:29:02.660 | 99.00th=[17695], 99.50th=[18220], 99.90th=[21103], 99.95th=[48497], 00:29:02.660 | 99.99th=[50594] 00:29:02.660 bw ( KiB/s): min=25344, max=28672, per=34.89%, avg=26521.60, stdev=892.24, samples=20 00:29:02.660 iops : min= 198, max= 224, avg=207.20, stdev= 6.97, samples=20 00:29:02.660 lat (msec) : 10=0.82%, 20=98.94%, 50=0.19%, 100=0.05% 00:29:02.660 cpu : usr=89.76%, sys=9.76%, ctx=17, majf=0, minf=201 00:29:02.660 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:02.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.660 issued rwts: total=2074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.660 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:02.660 filename0: (groupid=0, jobs=1): err= 0: pid=770946: Sat Jul 27 01:36:52 2024 00:29:02.660 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(247MiB/10012msec) 00:29:02.660 slat (nsec): min=4527, max=36433, avg=13450.38, stdev=3572.14 00:29:02.660 clat (usec): min=8884, max=58873, avg=15159.35, stdev=3541.54 00:29:02.660 lat (usec): min=8911, max=58885, avg=15172.80, stdev=3541.58 00:29:02.660 clat percentiles (usec): 00:29:02.660 | 1.00th=[10421], 5.00th=[12649], 10.00th=[13304], 20.00th=[13960], 00:29:02.660 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:29:02.660 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:29:02.660 | 99.00th=[18744], 99.50th=[55837], 99.90th=[58459], 99.95th=[58983], 00:29:02.660 | 99.99th=[58983] 00:29:02.660 bw ( KiB/s): min=23040, max=27392, per=33.28%, avg=25295.20, stdev=1267.56, samples=20 00:29:02.660 iops : min= 180, max= 214, avg=197.60, stdev= 9.92, samples=20 00:29:02.660 lat (msec) : 10=0.56%, 20=98.69%, 50=0.20%, 100=0.56% 00:29:02.660 cpu : usr=90.65%, sys=8.85%, ctx=27, majf=0, minf=110 00:29:02.660 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:02.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.660 issued rwts: total=1979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.660 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:02.660 filename0: (groupid=0, jobs=1): err= 0: pid=770947: Sat Jul 27 01:36:52 2024 00:29:02.660 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(239MiB/10044msec) 00:29:02.660 slat (nsec): min=4603, max=35808, avg=13023.23, stdev=2964.61 00:29:02.660 clat (usec): min=9500, max=59060, avg=15720.70, stdev=4403.26 00:29:02.660 lat (usec): min=9513, max=59073, avg=15733.72, stdev=4403.24 00:29:02.660 clat percentiles 
(usec): 00:29:02.660 | 1.00th=[10814], 5.00th=[13042], 10.00th=[13829], 20.00th=[14222], 00:29:02.660 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[15664], 00:29:02.660 | 70.00th=[16057], 80.00th=[16450], 90.00th=[17171], 95.00th=[17695], 00:29:02.660 | 99.00th=[46924], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:29:02.660 | 99.99th=[58983] 00:29:02.660 bw ( KiB/s): min=22272, max=26624, per=32.16%, avg=24450.30, stdev=1150.00, samples=20 00:29:02.660 iops : min= 174, max= 208, avg=191.00, stdev= 9.00, samples=20 00:29:02.660 lat (msec) : 10=0.26%, 20=98.59%, 50=0.21%, 100=0.94% 00:29:02.660 cpu : usr=90.95%, sys=8.55%, ctx=38, majf=0, minf=122 00:29:02.660 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:02.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.660 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.660 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:02.660 00:29:02.660 Run status group 0 (all jobs): 00:29:02.660 READ: bw=74.2MiB/s (77.8MB/s), 23.8MiB/s-25.8MiB/s (25.0MB/s-27.1MB/s), io=746MiB (782MB), run=10012-10044msec 00:29:02.660 01:36:52 -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:02.660 01:36:52 -- target/dif.sh@43 -- # local sub 00:29:02.660 01:36:52 -- target/dif.sh@45 -- # for sub in "$@" 00:29:02.660 01:36:52 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:02.660 01:36:52 -- target/dif.sh@36 -- # local sub_id=0 00:29:02.660 01:36:52 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:02.660 01:36:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:02.660 01:36:52 -- common/autotest_common.sh@10 -- # set +x 00:29:02.660 01:36:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:02.660 01:36:52 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:02.660 01:36:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:02.660 01:36:52 -- common/autotest_common.sh@10 -- # set +x 00:29:02.660 01:36:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:02.660 00:29:02.660 real 0m11.163s 00:29:02.660 user 0m28.410s 00:29:02.660 sys 0m3.012s 00:29:02.660 01:36:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:02.660 01:36:52 -- common/autotest_common.sh@10 -- # set +x 00:29:02.660 ************************************ 00:29:02.660 END TEST fio_dif_digest 00:29:02.660 ************************************ 00:29:02.660 01:36:52 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:02.660 01:36:52 -- target/dif.sh@147 -- # nvmftestfini 00:29:02.660 01:36:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:02.660 01:36:52 -- nvmf/common.sh@116 -- # sync 00:29:02.660 01:36:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:02.660 01:36:52 -- nvmf/common.sh@119 -- # set +e 00:29:02.660 01:36:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:02.660 01:36:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:02.660 rmmod nvme_tcp 00:29:02.660 rmmod nvme_fabrics 00:29:02.660 rmmod nvme_keyring 00:29:02.660 01:36:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:02.660 01:36:52 -- nvmf/common.sh@123 -- # set -e 00:29:02.660 01:36:52 -- nvmf/common.sh@124 -- # return 0 00:29:02.660 01:36:52 -- nvmf/common.sh@477 -- # '[' -n 764113 ']' 00:29:02.660 01:36:52 -- nvmf/common.sh@478 -- # killprocess 764113 00:29:02.660 01:36:52 -- 
common/autotest_common.sh@926 -- # '[' -z 764113 ']' 00:29:02.660 01:36:52 -- common/autotest_common.sh@930 -- # kill -0 764113 00:29:02.660 01:36:52 -- common/autotest_common.sh@931 -- # uname 00:29:02.660 01:36:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:02.660 01:36:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 764113 00:29:02.660 01:36:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:02.660 01:36:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:02.660 01:36:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 764113' 00:29:02.660 killing process with pid 764113 00:29:02.660 01:36:52 -- common/autotest_common.sh@945 -- # kill 764113 00:29:02.660 01:36:52 -- common/autotest_common.sh@950 -- # wait 764113 00:29:02.660 01:36:52 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:29:02.660 01:36:52 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:02.660 Waiting for block devices as requested 00:29:02.660 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:02.660 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:02.918 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:02.918 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:02.918 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:02.918 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:03.176 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:03.176 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:03.176 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:03.176 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:03.434 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:03.434 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:03.434 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:03.434 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:03.692 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:03.692 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:03.692 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:03.950 01:36:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:03.950 01:36:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:03.950 01:36:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:03.950 01:36:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:03.950 01:36:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.950 01:36:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:03.950 01:36:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.848 01:36:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:05.848 00:29:05.848 real 1m6.962s 00:29:05.848 user 6m24.642s 00:29:05.848 sys 0m20.679s 00:29:05.848 01:36:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:05.848 01:36:57 -- common/autotest_common.sh@10 -- # set +x 00:29:05.848 ************************************ 00:29:05.848 END TEST nvmf_dif 00:29:05.848 ************************************ 00:29:05.848 01:36:57 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:05.848 01:36:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:05.848 01:36:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:05.848 01:36:57 -- common/autotest_common.sh@10 -- # set +x 00:29:05.848 ************************************ 00:29:05.848 START TEST nvmf_abort_qd_sizes 00:29:05.848 
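The nvmftestfini teardown traced through here boils down to a handful of host-side commands; a rough sketch with this run's PID replaced by a placeholder variable and the helper-function plumbing omitted:

    # unload the kernel initiator modules pulled in for the test
    # (the trace shows nvme_tcp, nvme_fabrics and nvme_keyring being removed)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # stop the SPDK target application that served the tests
    kill "$nvmf_tgt_pid"    # 764113 in this log

    # hand NVMe/IOAT devices back from vfio-pci to their kernel drivers
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset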
************************************ 00:29:05.848 01:36:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:06.106 * Looking for test storage... 00:29:06.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:06.106 01:36:57 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:06.106 01:36:57 -- nvmf/common.sh@7 -- # uname -s 00:29:06.106 01:36:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:06.106 01:36:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:06.106 01:36:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:06.106 01:36:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:06.106 01:36:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:06.106 01:36:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:06.106 01:36:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:06.106 01:36:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:06.106 01:36:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:06.106 01:36:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:06.106 01:36:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:06.106 01:36:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:06.106 01:36:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:06.106 01:36:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:06.106 01:36:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:06.106 01:36:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:06.106 01:36:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.107 01:36:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.107 01:36:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.107 01:36:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.107 01:36:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.107 01:36:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.107 01:36:57 -- paths/export.sh@5 -- # export PATH 00:29:06.107 01:36:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.107 01:36:57 -- nvmf/common.sh@46 -- # : 0 00:29:06.107 01:36:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:06.107 01:36:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:06.107 01:36:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:06.107 01:36:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:06.107 01:36:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:06.107 01:36:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:06.107 01:36:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:06.107 01:36:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:06.107 01:36:57 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:29:06.107 01:36:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:06.107 01:36:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:06.107 01:36:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:06.107 01:36:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:06.107 01:36:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:06.107 01:36:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.107 01:36:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:06.107 01:36:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.107 01:36:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:06.107 01:36:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:06.107 01:36:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:06.107 01:36:57 -- common/autotest_common.sh@10 -- # set +x 00:29:08.008 01:36:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:08.008 01:36:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:08.008 01:36:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:08.008 01:36:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:08.008 01:36:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:08.008 01:36:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:08.008 01:36:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:08.008 01:36:59 -- nvmf/common.sh@294 -- # net_devs=() 00:29:08.008 01:36:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:08.008 01:36:59 -- nvmf/common.sh@295 -- # e810=() 00:29:08.008 01:36:59 -- nvmf/common.sh@295 -- # local -ga e810 00:29:08.008 01:36:59 -- nvmf/common.sh@296 -- # x722=() 00:29:08.008 01:36:59 -- nvmf/common.sh@296 -- # local -ga x722 00:29:08.008 01:36:59 -- nvmf/common.sh@297 -- # mlx=() 00:29:08.008 01:36:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:08.008 01:36:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.008 01:36:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.008 01:36:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.008 01:36:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.008 01:36:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.008 01:36:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.008 01:36:59 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.008 01:36:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.008 01:36:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.008 01:36:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.008 01:36:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.008 01:36:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:08.008 01:36:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:08.008 01:36:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:08.008 01:36:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:08.008 01:36:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:08.008 01:36:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:08.008 01:36:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:08.009 01:36:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:08.009 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:08.009 01:36:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:08.009 01:36:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:08.009 01:36:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.009 01:36:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.009 01:36:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:08.009 01:36:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:08.009 01:36:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:08.009 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:08.009 01:36:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:08.009 01:36:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:08.009 01:36:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.009 01:36:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.009 01:36:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:08.009 01:36:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:08.009 01:36:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:08.009 01:36:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:08.009 01:36:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:08.009 01:36:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.009 01:36:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:08.009 01:36:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.009 01:36:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:08.009 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:08.009 01:36:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.009 01:36:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:08.009 01:36:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.009 01:36:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:08.009 01:36:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.009 01:36:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:08.009 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:08.009 01:36:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.009 01:36:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:08.009 01:36:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:08.009 01:36:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:08.009 01:36:59 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:08.009 01:36:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:08.009 01:36:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.009 01:36:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.009 01:36:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.009 01:36:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:08.009 01:36:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.009 01:36:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.009 01:36:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:08.009 01:36:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.009 01:36:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.009 01:36:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:08.009 01:36:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:08.009 01:36:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.009 01:36:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.009 01:36:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.009 01:36:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:08.009 01:36:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:08.009 01:36:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.009 01:36:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.009 01:36:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.009 01:36:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:08.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:29:08.009 00:29:08.009 --- 10.0.0.2 ping statistics --- 00:29:08.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.009 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:29:08.009 01:36:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:08.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:29:08.009 00:29:08.009 --- 10.0.0.1 ping statistics --- 00:29:08.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.009 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:29:08.009 01:36:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.009 01:36:59 -- nvmf/common.sh@410 -- # return 0 00:29:08.009 01:36:59 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:29:08.009 01:36:59 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:09.386 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:09.386 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:09.386 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:09.386 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:09.386 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:09.386 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:09.386 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:09.386 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:09.386 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:09.386 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:09.386 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:09.386 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:09.386 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:09.386 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:09.386 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:09.386 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:10.323 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:29:10.323 01:37:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.323 01:37:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:10.323 01:37:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:10.323 01:37:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.323 01:37:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:10.323 01:37:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:10.323 01:37:02 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:29:10.323 01:37:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:10.323 01:37:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:10.323 01:37:02 -- common/autotest_common.sh@10 -- # set +x 00:29:10.323 01:37:02 -- nvmf/common.sh@469 -- # nvmfpid=775836 00:29:10.323 01:37:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:10.323 01:37:02 -- nvmf/common.sh@470 -- # waitforlisten 775836 00:29:10.323 01:37:02 -- common/autotest_common.sh@819 -- # '[' -z 775836 ']' 00:29:10.323 01:37:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.323 01:37:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:10.323 01:37:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.323 01:37:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:10.323 01:37:02 -- common/autotest_common.sh@10 -- # set +x 00:29:10.323 [2024-07-27 01:37:02.044178] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:29:10.323 [2024-07-27 01:37:02.044248] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.323 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.582 [2024-07-27 01:37:02.108260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:10.582 [2024-07-27 01:37:02.227295] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:10.582 [2024-07-27 01:37:02.227452] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.582 [2024-07-27 01:37:02.227470] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.582 [2024-07-27 01:37:02.227487] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.582 [2024-07-27 01:37:02.227550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.582 [2024-07-27 01:37:02.227608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.582 [2024-07-27 01:37:02.227675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:10.582 [2024-07-27 01:37:02.227677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.514 01:37:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:11.514 01:37:02 -- common/autotest_common.sh@852 -- # return 0 00:29:11.514 01:37:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:11.514 01:37:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:11.514 01:37:02 -- common/autotest_common.sh@10 -- # set +x 00:29:11.514 01:37:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.514 01:37:02 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:11.514 01:37:02 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:29:11.514 01:37:02 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:29:11.514 01:37:03 -- scripts/common.sh@311 -- # local bdf bdfs 00:29:11.514 01:37:03 -- scripts/common.sh@312 -- # local nvmes 00:29:11.514 01:37:03 -- scripts/common.sh@314 -- # [[ -n 0000:88:00.0 ]] 00:29:11.514 01:37:03 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:11.514 01:37:03 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:29:11.514 01:37:03 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:29:11.514 01:37:03 -- scripts/common.sh@322 -- # uname -s 00:29:11.514 01:37:03 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:29:11.514 01:37:03 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:29:11.514 01:37:03 -- scripts/common.sh@327 -- # (( 1 )) 00:29:11.514 01:37:03 -- scripts/common.sh@328 -- # printf '%s\n' 0000:88:00.0 00:29:11.514 01:37:03 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:29:11.514 01:37:03 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:88:00.0 00:29:11.514 01:37:03 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:29:11.514 01:37:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:11.514 01:37:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:11.514 01:37:03 -- common/autotest_common.sh@10 -- # set +x 00:29:11.514 ************************************ 00:29:11.514 START TEST 
spdk_target_abort 00:29:11.514 ************************************ 00:29:11.514 01:37:03 -- common/autotest_common.sh@1104 -- # spdk_target 00:29:11.514 01:37:03 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:11.514 01:37:03 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:29:11.514 01:37:03 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:29:11.514 01:37:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:11.514 01:37:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.817 spdk_targetn1 00:29:14.817 01:37:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:14.817 01:37:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.817 01:37:05 -- common/autotest_common.sh@10 -- # set +x 00:29:14.817 [2024-07-27 01:37:05.851331] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.817 01:37:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:29:14.817 01:37:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.817 01:37:05 -- common/autotest_common.sh@10 -- # set +x 00:29:14.817 01:37:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:29:14.817 01:37:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.817 01:37:05 -- common/autotest_common.sh@10 -- # set +x 00:29:14.817 01:37:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:29:14.817 01:37:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.817 01:37:05 -- common/autotest_common.sh@10 -- # set +x 00:29:14.817 [2024-07-27 01:37:05.883637] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.817 01:37:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:14.817 01:37:05 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:14.817 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.340 Initializing NVMe Controllers 00:29:17.340 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:29:17.340 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:29:17.340 Initialization complete. Launching workers. 00:29:17.340 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10036, failed: 0 00:29:17.340 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1327, failed to submit 8709 00:29:17.340 success 805, unsuccess 522, failed 0 00:29:17.340 01:37:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:17.340 01:37:09 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:17.597 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.884 Initializing NVMe Controllers 00:29:20.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:29:20.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:29:20.884 Initialization complete. Launching workers. 00:29:20.884 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8664, failed: 0 00:29:20.884 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1241, failed to submit 7423 00:29:20.884 success 332, unsuccess 909, failed 0 00:29:20.884 01:37:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:20.884 01:37:12 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:20.884 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.167 Initializing NVMe Controllers 00:29:24.167 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:29:24.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:29:24.168 Initialization complete. Launching workers. 
00:29:24.168 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31956, failed: 0 00:29:24.168 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2720, failed to submit 29236 00:29:24.168 success 553, unsuccess 2167, failed 0 00:29:24.168 01:37:15 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:29:24.168 01:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.168 01:37:15 -- common/autotest_common.sh@10 -- # set +x 00:29:24.168 01:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.168 01:37:15 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:24.168 01:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.168 01:37:15 -- common/autotest_common.sh@10 -- # set +x 00:29:25.545 01:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.545 01:37:16 -- target/abort_qd_sizes.sh@62 -- # killprocess 775836 00:29:25.545 01:37:16 -- common/autotest_common.sh@926 -- # '[' -z 775836 ']' 00:29:25.545 01:37:16 -- common/autotest_common.sh@930 -- # kill -0 775836 00:29:25.545 01:37:16 -- common/autotest_common.sh@931 -- # uname 00:29:25.545 01:37:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:25.545 01:37:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 775836 00:29:25.545 01:37:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:25.545 01:37:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:25.545 01:37:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 775836' 00:29:25.545 killing process with pid 775836 00:29:25.545 01:37:17 -- common/autotest_common.sh@945 -- # kill 775836 00:29:25.545 01:37:17 -- common/autotest_common.sh@950 -- # wait 775836 00:29:25.545 00:29:25.545 real 0m14.260s 00:29:25.545 user 0m56.347s 00:29:25.545 sys 0m2.629s 00:29:25.545 01:37:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.545 01:37:17 -- common/autotest_common.sh@10 -- # set +x 00:29:25.545 ************************************ 00:29:25.545 END TEST spdk_target_abort 00:29:25.545 ************************************ 00:29:25.545 01:37:17 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:29:25.545 01:37:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:25.545 01:37:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:25.545 01:37:17 -- common/autotest_common.sh@10 -- # set +x 00:29:25.545 ************************************ 00:29:25.545 START TEST kernel_target_abort 00:29:25.545 ************************************ 00:29:25.545 01:37:17 -- common/autotest_common.sh@1104 -- # kernel_target 00:29:25.545 01:37:17 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:29:25.545 01:37:17 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:29:25.545 01:37:17 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:29:25.545 01:37:17 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:29:25.545 01:37:17 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:29:25.545 01:37:17 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:25.545 01:37:17 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:25.545 01:37:17 -- nvmf/common.sh@627 -- # local block nvme 00:29:25.545 01:37:17 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:29:25.545 01:37:17 -- nvmf/common.sh@630 -- # modprobe nvmet 00:29:25.803 01:37:17 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:25.803 01:37:17 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:26.738 Waiting for block devices as requested 00:29:26.738 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:26.998 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:26.998 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:27.259 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:27.259 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:27.259 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:27.259 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:27.519 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:27.519 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:27.519 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:27.519 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:27.519 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:27.778 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:27.778 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:27.778 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:28.037 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:28.037 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:28.037 01:37:19 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:29:28.037 01:37:19 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:28.037 01:37:19 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:29:28.037 01:37:19 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:29:28.037 01:37:19 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:28.296 No valid GPT data, bailing 00:29:28.296 01:37:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:28.296 01:37:19 -- scripts/common.sh@393 -- # pt= 00:29:28.296 01:37:19 -- scripts/common.sh@394 -- # return 1 00:29:28.296 01:37:19 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:29:28.296 01:37:19 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:29:28.296 01:37:19 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:29:28.296 01:37:19 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:28.296 01:37:19 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:28.296 01:37:19 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:29:28.296 01:37:19 -- nvmf/common.sh@654 -- # echo 1 00:29:28.296 01:37:19 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:29:28.296 01:37:19 -- nvmf/common.sh@656 -- # echo 1 00:29:28.296 01:37:19 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:29:28.296 01:37:19 -- nvmf/common.sh@663 -- # echo tcp 00:29:28.296 01:37:19 -- nvmf/common.sh@664 -- # echo 4420 00:29:28.296 01:37:19 -- nvmf/common.sh@665 -- # echo ipv4 00:29:28.296 01:37:19 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:28.296 01:37:19 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:29:28.296 00:29:28.296 Discovery Log Number of Records 2, Generation counter 2 00:29:28.296 =====Discovery Log Entry 0====== 00:29:28.296 trtype: tcp 00:29:28.296 adrfam: ipv4 00:29:28.296 
subtype: current discovery subsystem 00:29:28.296 treq: not specified, sq flow control disable supported 00:29:28.296 portid: 1 00:29:28.296 trsvcid: 4420 00:29:28.296 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:28.296 traddr: 10.0.0.1 00:29:28.296 eflags: none 00:29:28.296 sectype: none 00:29:28.296 =====Discovery Log Entry 1====== 00:29:28.296 trtype: tcp 00:29:28.296 adrfam: ipv4 00:29:28.296 subtype: nvme subsystem 00:29:28.297 treq: not specified, sq flow control disable supported 00:29:28.297 portid: 1 00:29:28.297 trsvcid: 4420 00:29:28.297 subnqn: kernel_target 00:29:28.297 traddr: 10.0.0.1 00:29:28.297 eflags: none 00:29:28.297 sectype: none 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:28.297 01:37:19 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:28.297 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.584 Initializing NVMe Controllers 00:29:31.584 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:31.584 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:31.584 Initialization complete. Launching workers. 
00:29:31.584 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 28524, failed: 0 00:29:31.584 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 28524, failed to submit 0 00:29:31.584 success 0, unsuccess 28524, failed 0 00:29:31.584 01:37:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:31.584 01:37:23 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:31.584 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.873 Initializing NVMe Controllers 00:29:34.873 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:34.873 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:34.873 Initialization complete. Launching workers. 00:29:34.873 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 58250, failed: 0 00:29:34.873 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 14670, failed to submit 43580 00:29:34.873 success 0, unsuccess 14670, failed 0 00:29:34.873 01:37:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:34.873 01:37:26 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:34.873 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.190 Initializing NVMe Controllers 00:29:38.190 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:38.190 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:38.190 Initialization complete. Launching workers. 
00:29:38.190 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 56676, failed: 0 00:29:38.190 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 14130, failed to submit 42546 00:29:38.190 success 0, unsuccess 14130, failed 0 00:29:38.190 01:37:29 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:29:38.190 01:37:29 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:29:38.190 01:37:29 -- nvmf/common.sh@677 -- # echo 0 00:29:38.190 01:37:29 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:29:38.190 01:37:29 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:38.190 01:37:29 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:38.190 01:37:29 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:29:38.190 01:37:29 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:29:38.190 01:37:29 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:29:38.190 00:29:38.190 real 0m12.012s 00:29:38.190 user 0m4.156s 00:29:38.190 sys 0m2.570s 00:29:38.190 01:37:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:38.190 01:37:29 -- common/autotest_common.sh@10 -- # set +x 00:29:38.190 ************************************ 00:29:38.190 END TEST kernel_target_abort 00:29:38.190 ************************************ 00:29:38.190 01:37:29 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:29:38.190 01:37:29 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:29:38.190 01:37:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:38.191 01:37:29 -- nvmf/common.sh@116 -- # sync 00:29:38.191 01:37:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:38.191 01:37:29 -- nvmf/common.sh@119 -- # set +e 00:29:38.191 01:37:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:38.191 01:37:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:38.191 rmmod nvme_tcp 00:29:38.191 rmmod nvme_fabrics 00:29:38.191 rmmod nvme_keyring 00:29:38.191 01:37:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:38.191 01:37:29 -- nvmf/common.sh@123 -- # set -e 00:29:38.191 01:37:29 -- nvmf/common.sh@124 -- # return 0 00:29:38.191 01:37:29 -- nvmf/common.sh@477 -- # '[' -n 775836 ']' 00:29:38.191 01:37:29 -- nvmf/common.sh@478 -- # killprocess 775836 00:29:38.191 01:37:29 -- common/autotest_common.sh@926 -- # '[' -z 775836 ']' 00:29:38.191 01:37:29 -- common/autotest_common.sh@930 -- # kill -0 775836 00:29:38.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (775836) - No such process 00:29:38.191 01:37:29 -- common/autotest_common.sh@953 -- # echo 'Process with pid 775836 is not found' 00:29:38.191 Process with pid 775836 is not found 00:29:38.191 01:37:29 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:29:38.191 01:37:29 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:38.757 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:29:38.757 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:29:38.757 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:38.757 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:38.757 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:38.757 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:39.016 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 
00:29:39.016 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:39.016 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:39.016 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:39.016 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:39.016 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:39.016 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:39.016 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:39.016 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:39.016 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:39.016 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:39.016 01:37:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:39.016 01:37:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:39.016 01:37:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:39.016 01:37:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:39.016 01:37:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.016 01:37:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:39.016 01:37:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.562 01:37:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:41.562 00:29:41.562 real 0m35.211s 00:29:41.562 user 1m2.763s 00:29:41.562 sys 0m8.458s 00:29:41.562 01:37:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:41.562 01:37:32 -- common/autotest_common.sh@10 -- # set +x 00:29:41.562 ************************************ 00:29:41.562 END TEST nvmf_abort_qd_sizes 00:29:41.562 ************************************ 00:29:41.562 01:37:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:41.562 01:37:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:41.562 01:37:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:41.562 01:37:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:41.562 01:37:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:41.562 01:37:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:41.562 01:37:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:41.562 01:37:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:41.562 01:37:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:41.562 01:37:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:41.562 01:37:32 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:41.562 01:37:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:41.562 01:37:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:41.562 01:37:32 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:41.562 01:37:32 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:29:41.562 01:37:32 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:29:41.562 01:37:32 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:29:41.562 01:37:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:41.562 01:37:32 -- common/autotest_common.sh@10 -- # set +x 00:29:41.562 01:37:32 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:29:41.562 01:37:32 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:29:41.562 01:37:32 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:29:41.562 01:37:32 -- common/autotest_common.sh@10 -- # set +x 00:29:42.934 INFO: APP EXITING 00:29:42.934 INFO: killing all VMs 00:29:42.934 INFO: killing vhost app 00:29:42.934 INFO: EXIT DONE 00:29:44.307 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:29:44.307 0000:00:04.7 (8086 0e27): 
Already using the ioatdma driver 00:29:44.307 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:44.307 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:44.307 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:44.307 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:44.307 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:29:44.307 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:44.307 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:44.307 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:44.307 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:44.307 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:44.307 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:44.307 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:44.307 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:44.307 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:44.307 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:45.684 Cleaning 00:29:45.684 Removing: /var/run/dpdk/spdk0/config 00:29:45.685 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:45.685 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:45.685 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:45.685 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:45.685 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:45.685 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:45.685 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:45.685 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:45.685 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:45.685 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:45.685 Removing: /var/run/dpdk/spdk1/config 00:29:45.685 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:45.685 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:45.685 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:45.685 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:45.685 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:45.685 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:45.685 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:45.685 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:45.685 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:45.685 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:45.685 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:45.685 Removing: /var/run/dpdk/spdk2/config 00:29:45.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:45.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:45.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:45.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:45.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:45.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:45.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:45.685 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:45.685 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:45.685 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:45.685 Removing: /var/run/dpdk/spdk3/config 00:29:45.685 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:45.685 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:45.685 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:45.685 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:45.685 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:45.685 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:45.685 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:45.685 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:45.685 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:45.685 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:45.685 Removing: /var/run/dpdk/spdk4/config 00:29:45.685 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:45.685 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:45.685 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:45.685 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:45.685 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:45.685 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:45.685 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:45.685 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:45.685 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:45.685 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:45.685 Removing: /dev/shm/bdev_svc_trace.1 00:29:45.685 Removing: /dev/shm/nvmf_trace.0 00:29:45.685 Removing: /dev/shm/spdk_tgt_trace.pid510169 00:29:45.685 Removing: /var/run/dpdk/spdk0 00:29:45.685 Removing: /var/run/dpdk/spdk1 00:29:45.685 Removing: /var/run/dpdk/spdk2 00:29:45.685 Removing: /var/run/dpdk/spdk3 00:29:45.685 Removing: /var/run/dpdk/spdk4 00:29:45.685 Removing: /var/run/dpdk/spdk_pid508471 00:29:45.685 Removing: /var/run/dpdk/spdk_pid509220 00:29:45.685 Removing: /var/run/dpdk/spdk_pid510169 00:29:45.685 Removing: /var/run/dpdk/spdk_pid510651 00:29:45.685 Removing: /var/run/dpdk/spdk_pid511989 00:29:45.685 Removing: /var/run/dpdk/spdk_pid513450 00:29:45.685 Removing: /var/run/dpdk/spdk_pid513757 00:29:45.685 Removing: /var/run/dpdk/spdk_pid514075 00:29:45.685 Removing: /var/run/dpdk/spdk_pid514311 00:29:45.685 Removing: /var/run/dpdk/spdk_pid514604 00:29:45.685 Removing: /var/run/dpdk/spdk_pid514761 00:29:45.685 Removing: /var/run/dpdk/spdk_pid514932 00:29:45.685 Removing: /var/run/dpdk/spdk_pid515207 00:29:45.685 Removing: /var/run/dpdk/spdk_pid515575 00:29:45.685 Removing: /var/run/dpdk/spdk_pid518083 00:29:45.685 Removing: /var/run/dpdk/spdk_pid518294 00:29:45.685 Removing: /var/run/dpdk/spdk_pid518587 00:29:45.685 Removing: /var/run/dpdk/spdk_pid518727 00:29:45.685 Removing: /var/run/dpdk/spdk_pid519049 00:29:45.685 Removing: /var/run/dpdk/spdk_pid519182 00:29:45.685 Removing: /var/run/dpdk/spdk_pid519508 00:29:45.685 Removing: /var/run/dpdk/spdk_pid519635 00:29:45.685 Removing: /var/run/dpdk/spdk_pid519934 00:29:45.685 Removing: /var/run/dpdk/spdk_pid520068 00:29:45.685 Removing: /var/run/dpdk/spdk_pid520242 00:29:45.685 Removing: /var/run/dpdk/spdk_pid520376 00:29:45.685 Removing: /var/run/dpdk/spdk_pid520760 00:29:45.685 Removing: /var/run/dpdk/spdk_pid520914 00:29:45.685 Removing: /var/run/dpdk/spdk_pid521128 00:29:45.685 Removing: /var/run/dpdk/spdk_pid521408 00:29:45.685 Removing: /var/run/dpdk/spdk_pid521437 00:29:45.685 Removing: /var/run/dpdk/spdk_pid521612 00:29:45.685 Removing: /var/run/dpdk/spdk_pid521766 00:29:45.685 Removing: /var/run/dpdk/spdk_pid521921 00:29:45.685 Removing: /var/run/dpdk/spdk_pid522178 00:29:45.685 Removing: /var/run/dpdk/spdk_pid522344 00:29:45.685 Removing: /var/run/dpdk/spdk_pid522487 00:29:45.685 Removing: /var/run/dpdk/spdk_pid522765 
00:29:45.685 Removing: /var/run/dpdk/spdk_pid522912 00:29:45.685 Removing: /var/run/dpdk/spdk_pid523076 00:29:45.685 Removing: /var/run/dpdk/spdk_pid523324 00:29:45.685 Removing: /var/run/dpdk/spdk_pid523494 00:29:45.685 Removing: /var/run/dpdk/spdk_pid523643 00:29:45.685 Removing: /var/run/dpdk/spdk_pid523851 00:29:45.685 Removing: /var/run/dpdk/spdk_pid524062 00:29:45.685 Removing: /var/run/dpdk/spdk_pid524226 00:29:45.685 Removing: /var/run/dpdk/spdk_pid524371 00:29:45.685 Removing: /var/run/dpdk/spdk_pid524649 00:29:45.685 Removing: /var/run/dpdk/spdk_pid524793 00:29:45.685 Removing: /var/run/dpdk/spdk_pid524952 00:29:45.685 Removing: /var/run/dpdk/spdk_pid525217 00:29:45.685 Removing: /var/run/dpdk/spdk_pid525375 00:29:45.685 Removing: /var/run/dpdk/spdk_pid525524 00:29:45.685 Removing: /var/run/dpdk/spdk_pid525799 00:29:45.685 Removing: /var/run/dpdk/spdk_pid525945 00:29:45.685 Removing: /var/run/dpdk/spdk_pid526103 00:29:45.685 Removing: /var/run/dpdk/spdk_pid526323 00:29:45.685 Removing: /var/run/dpdk/spdk_pid526526 00:29:45.685 Removing: /var/run/dpdk/spdk_pid526672 00:29:45.685 Removing: /var/run/dpdk/spdk_pid526873 00:29:45.685 Removing: /var/run/dpdk/spdk_pid527101 00:29:45.685 Removing: /var/run/dpdk/spdk_pid527254 00:29:45.685 Removing: /var/run/dpdk/spdk_pid527448 00:29:45.685 Removing: /var/run/dpdk/spdk_pid527684 00:29:45.685 Removing: /var/run/dpdk/spdk_pid527833 00:29:45.685 Removing: /var/run/dpdk/spdk_pid528081 00:29:45.685 Removing: /var/run/dpdk/spdk_pid528267 00:29:45.685 Removing: /var/run/dpdk/spdk_pid528423 00:29:45.685 Removing: /var/run/dpdk/spdk_pid528631 00:29:45.685 Removing: /var/run/dpdk/spdk_pid528850 00:29:45.685 Removing: /var/run/dpdk/spdk_pid528997 00:29:45.685 Removing: /var/run/dpdk/spdk_pid529242 00:29:45.685 Removing: /var/run/dpdk/spdk_pid529346 00:29:45.685 Removing: /var/run/dpdk/spdk_pid529549 00:29:45.685 Removing: /var/run/dpdk/spdk_pid531739 00:29:45.685 Removing: /var/run/dpdk/spdk_pid587469 00:29:45.685 Removing: /var/run/dpdk/spdk_pid590123 00:29:45.685 Removing: /var/run/dpdk/spdk_pid597123 00:29:45.685 Removing: /var/run/dpdk/spdk_pid600623 00:29:45.944 Removing: /var/run/dpdk/spdk_pid603704 00:29:45.944 Removing: /var/run/dpdk/spdk_pid604118 00:29:45.944 Removing: /var/run/dpdk/spdk_pid609201 00:29:45.944 Removing: /var/run/dpdk/spdk_pid609482 00:29:45.944 Removing: /var/run/dpdk/spdk_pid612163 00:29:45.944 Removing: /var/run/dpdk/spdk_pid615927 00:29:45.944 Removing: /var/run/dpdk/spdk_pid618065 00:29:45.944 Removing: /var/run/dpdk/spdk_pid624819 00:29:45.944 Removing: /var/run/dpdk/spdk_pid630217 00:29:45.944 Removing: /var/run/dpdk/spdk_pid631572 00:29:45.944 Removing: /var/run/dpdk/spdk_pid632262 00:29:45.944 Removing: /var/run/dpdk/spdk_pid643363 00:29:45.944 Removing: /var/run/dpdk/spdk_pid645617 00:29:45.944 Removing: /var/run/dpdk/spdk_pid648442 00:29:45.944 Removing: /var/run/dpdk/spdk_pid649658 00:29:45.944 Removing: /var/run/dpdk/spdk_pid651029 00:29:45.944 Removing: /var/run/dpdk/spdk_pid651299 00:29:45.944 Removing: /var/run/dpdk/spdk_pid651447 00:29:45.944 Removing: /var/run/dpdk/spdk_pid651626 00:29:45.944 Removing: /var/run/dpdk/spdk_pid652197 00:29:45.944 Removing: /var/run/dpdk/spdk_pid653690 00:29:45.944 Removing: /var/run/dpdk/spdk_pid654580 00:29:45.944 Removing: /var/run/dpdk/spdk_pid655076 00:29:45.944 Removing: /var/run/dpdk/spdk_pid658655 00:29:45.944 Removing: /var/run/dpdk/spdk_pid662111 00:29:45.944 Removing: /var/run/dpdk/spdk_pid665746 00:29:45.944 Removing: /var/run/dpdk/spdk_pid690049 00:29:45.944 
Removing: /var/run/dpdk/spdk_pid692836 00:29:45.944 Removing: /var/run/dpdk/spdk_pid696726 00:29:45.944 Removing: /var/run/dpdk/spdk_pid697830 00:29:45.944 Removing: /var/run/dpdk/spdk_pid699089 00:29:45.944 Removing: /var/run/dpdk/spdk_pid702283 00:29:45.944 Removing: /var/run/dpdk/spdk_pid704800 00:29:45.944 Removing: /var/run/dpdk/spdk_pid709062 00:29:45.944 Removing: /var/run/dpdk/spdk_pid709170 00:29:45.944 Removing: /var/run/dpdk/spdk_pid712009 00:29:45.944 Removing: /var/run/dpdk/spdk_pid712269 00:29:45.944 Removing: /var/run/dpdk/spdk_pid712412 00:29:45.944 Removing: /var/run/dpdk/spdk_pid712683 00:29:45.944 Removing: /var/run/dpdk/spdk_pid712700 00:29:45.944 Removing: /var/run/dpdk/spdk_pid713798 00:29:45.944 Removing: /var/run/dpdk/spdk_pid715136 00:29:45.944 Removing: /var/run/dpdk/spdk_pid716353 00:29:45.944 Removing: /var/run/dpdk/spdk_pid717570 00:29:45.944 Removing: /var/run/dpdk/spdk_pid718787 00:29:45.944 Removing: /var/run/dpdk/spdk_pid720013 00:29:45.944 Removing: /var/run/dpdk/spdk_pid723896 00:29:45.944 Removing: /var/run/dpdk/spdk_pid724358 00:29:45.944 Removing: /var/run/dpdk/spdk_pid725692 00:29:45.944 Removing: /var/run/dpdk/spdk_pid726447 00:29:45.944 Removing: /var/run/dpdk/spdk_pid730230 00:29:45.944 Removing: /var/run/dpdk/spdk_pid732384 00:29:45.944 Removing: /var/run/dpdk/spdk_pid736500 00:29:45.944 Removing: /var/run/dpdk/spdk_pid740235 00:29:45.944 Removing: /var/run/dpdk/spdk_pid743801 00:29:45.944 Removing: /var/run/dpdk/spdk_pid744225 00:29:45.944 Removing: /var/run/dpdk/spdk_pid744761 00:29:45.944 Removing: /var/run/dpdk/spdk_pid745183 00:29:45.944 Removing: /var/run/dpdk/spdk_pid745789 00:29:45.944 Removing: /var/run/dpdk/spdk_pid746337 00:29:45.944 Removing: /var/run/dpdk/spdk_pid746887 00:29:45.944 Removing: /var/run/dpdk/spdk_pid747444 00:29:45.944 Removing: /var/run/dpdk/spdk_pid750114 00:29:45.944 Removing: /var/run/dpdk/spdk_pid750261 00:29:45.944 Removing: /var/run/dpdk/spdk_pid754119 00:29:45.944 Removing: /var/run/dpdk/spdk_pid754295 00:29:45.944 Removing: /var/run/dpdk/spdk_pid756061 00:29:45.944 Removing: /var/run/dpdk/spdk_pid761220 00:29:45.944 Removing: /var/run/dpdk/spdk_pid761233 00:29:45.944 Removing: /var/run/dpdk/spdk_pid764294 00:29:45.944 Removing: /var/run/dpdk/spdk_pid765842 00:29:45.944 Removing: /var/run/dpdk/spdk_pid767785 00:29:45.944 Removing: /var/run/dpdk/spdk_pid768548 00:29:45.944 Removing: /var/run/dpdk/spdk_pid769994 00:29:45.944 Removing: /var/run/dpdk/spdk_pid770770 00:29:45.944 Removing: /var/run/dpdk/spdk_pid776275 00:29:45.944 Removing: /var/run/dpdk/spdk_pid776679 00:29:45.944 Removing: /var/run/dpdk/spdk_pid777092 00:29:45.944 Removing: /var/run/dpdk/spdk_pid778575 00:29:45.944 Removing: /var/run/dpdk/spdk_pid778976 00:29:45.944 Removing: /var/run/dpdk/spdk_pid779399 00:29:45.944 Clean 00:29:46.202 killing process with pid 480909 00:29:54.322 killing process with pid 480906 00:29:54.322 killing process with pid 480908 00:29:54.322 killing process with pid 480907 00:29:54.322 01:37:45 -- common/autotest_common.sh@1436 -- # return 0 00:29:54.322 01:37:45 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:29:54.322 01:37:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:54.322 01:37:45 -- common/autotest_common.sh@10 -- # set +x 00:29:54.322 01:37:45 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:29:54.322 01:37:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:54.322 01:37:45 -- common/autotest_common.sh@10 -- # set +x 00:29:54.322 01:37:45 -- spdk/autotest.sh@390 -- # chmod a+r 
00:29:54.322 01:37:45 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:29:54.322 01:37:45 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:29:54.322 01:37:45 -- spdk/autotest.sh@394 -- # hash lcov
00:29:54.322 01:37:45 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:29:54.322 01:37:45 -- spdk/autotest.sh@396 -- # hostname
00:29:54.322 01:37:45 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:29:54.322 geninfo: WARNING: invalid characters removed from testname!
00:30:20.884 01:38:10 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:22.266 01:38:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:24.833 01:38:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:28.127 01:38:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:30.667 01:38:21 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:33.204 01:38:24 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
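Note: the lcov calls above follow a capture, combine, filter pattern. Test counters are captured into cov_test.info, merged with the pre-test baseline cov_base.info into cov_total.info, and then third-party and helper paths (dpdk, /usr, examples/vmd, spdk_lspci, spdk_top) are pruned from the combined tracefile one pattern at a time. A condensed sketch of the same flow, with SRC and OUT as placeholder variables for the source tree and output directory:

  # Sketch of the capture -> combine -> filter coverage flow shown above.
  SRC=/path/to/spdk          # placeholder for the checked-out source tree
  OUT=/path/to/output        # placeholder for the build output directory
  lcov --no-external -q -c -d "$SRC" -t "$(hostname)" -o "$OUT/cov_test.info"
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
  done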
00:30:35.741 01:38:27 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:35.741 01:38:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:35.741 01:38:27 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:35.741 01:38:27 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:35.741 01:38:27 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:35.742 01:38:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:35.742 01:38:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:35.742 01:38:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:35.742 01:38:27 -- paths/export.sh@5 -- $ export PATH
00:30:35.742 01:38:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:35.742 01:38:27 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:30:35.742 01:38:27 -- common/autobuild_common.sh@438 -- $ date +%s
00:30:35.742 01:38:27 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1722037107.XXXXXX
00:30:35.742 01:38:27 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1722037107.ED1JwU
00:30:35.742 01:38:27 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]]
00:30:35.742 01:38:27 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']'
00:30:35.742 01:38:27 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:30:35.742 01:38:27 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:35.742 01:38:27 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:35.742 01:38:27 -- common/autobuild_common.sh@454 -- $ get_config_params
00:30:35.742 01:38:27 -- common/autotest_common.sh@387 -- $ xtrace_disable
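Note: before packaging, autobuild_common.sh sources paths/export.sh to prepend the pinned toolchain directories to PATH, allocates a throwaway workspace with mktemp, and assembles a scan-build wrapper from accumulated --exclude flags. The pattern reduces to something like the sketch below; repo and out are placeholder names, not the script's actual variables.

  # Sketch of the environment and packaging setup traced above (placeholder names).
  repo=/path/to/spdk
  out=$repo/../output
  export PATH="/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:$PATH"  # prepend pinned toolchains
  SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")                                # per-build scratch directory
  scanbuild_exclude="--exclude $repo/dpdk/"
  scanbuild_exclude+=" --exclude $repo/xnvme --exclude /tmp"
  scanbuild="scan-build -o $out/scan-build-tmp $scanbuild_exclude --status-bugs"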
00:30:35.742 01:38:27 -- common/autotest_common.sh@10 -- $ set +x
00:30:35.742 01:38:27 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:30:35.742 01:38:27 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:30:35.742 01:38:27 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:35.742 01:38:27 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:35.742 01:38:27 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:30:35.742 01:38:27 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:35.742 01:38:27 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:35.742 01:38:27 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:35.742 01:38:27 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:35.742 01:38:27 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:35.742 01:38:27 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:35.742 + [[ -n 438097 ]]
00:30:35.742 + sudo kill 438097
00:30:35.752 [Pipeline] }
00:30:35.769 [Pipeline] // stage
00:30:35.775 [Pipeline] }
00:30:35.791 [Pipeline] // timeout
00:30:35.796 [Pipeline] }
00:30:35.812 [Pipeline] // catchError
00:30:35.818 [Pipeline] }
00:30:35.835 [Pipeline] // wrap
00:30:35.841 [Pipeline] }
00:30:35.856 [Pipeline] // catchError
00:30:35.865 [Pipeline] stage
00:30:35.868 [Pipeline] { (Epilogue)
00:30:35.882 [Pipeline] catchError
00:30:35.883 [Pipeline] {
00:30:35.898 [Pipeline] echo
00:30:35.900 Cleanup processes
00:30:35.906 [Pipeline] sh
00:30:36.192 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:36.192 791094 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:36.206 [Pipeline] sh
00:30:36.491 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:36.491 ++ grep -v 'sudo pgrep'
00:30:36.491 ++ awk '{print $1}'
00:30:36.492 + sudo kill -9
00:30:36.492 + true
00:30:36.504 [Pipeline] sh
00:30:36.788 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:46.808 [Pipeline] sh
00:30:47.094 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:47.094 Artifacts sizes are good
00:30:47.109 [Pipeline] archiveArtifacts
00:30:47.117 Archiving artifacts
00:30:47.327 [Pipeline] sh
00:30:47.611 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:30:47.627 [Pipeline] cleanWs
00:30:47.637 [WS-CLEANUP] Deleting project workspace...
00:30:47.637 [WS-CLEANUP] Deferred wipeout is used...
00:30:47.646 [WS-CLEANUP] done
00:30:47.647 [Pipeline] }
00:30:47.667 [Pipeline] // catchError
00:30:47.679 [Pipeline] sh
00:30:47.961 + logger -p user.info -t JENKINS-CI
00:30:47.969 [Pipeline] }
00:30:47.985 [Pipeline] // stage
00:30:47.991 [Pipeline] }
00:30:48.008 [Pipeline] // node
00:30:48.013 [Pipeline] End of Pipeline
00:30:48.074 Finished: SUCCESS